Test Report: KVM_Linux_crio 17598

6b0f6a676b64ff92a44d6b619fa30804f7878b3f:2023-11-14:31877

Failed tests (28/292)

Order  Failed test  Duration (s)
27 TestAddons/parallel/Registry 24.27
28 TestAddons/parallel/Ingress 162.99
41 TestAddons/StoppedEnableDisable 155.61
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.05
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 166.94
205 TestMultiNode/serial/PingHostFrom2Pods 3.24
211 TestMultiNode/serial/RestartKeepsNodes 690.15
213 TestMultiNode/serial/StopMultiNode 143.59
220 TestPreload 278.96
226 TestRunningBinaryUpgrade 144.04
245 TestStoppedBinaryUpgrade/Upgrade 281.76
265 TestPause/serial/SecondStartNoReconfiguration 107.89
319 TestStartStop/group/no-preload/serial/Stop 140.02
322 TestStartStop/group/embed-certs/serial/Stop 139.42
325 TestStartStop/group/old-k8s-version/serial/Stop 139.95
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.79
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
333 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.28
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.31
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.23
340 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.21
341 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 392.13
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 414.69
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 308.18
344 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 238.56
TestAddons/parallel/Registry (24.27s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 26.664402ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017882418s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018520245s
addons_test.go:339: (dbg) Run:  kubectl --context addons-317784 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-317784 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-317784 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.704397632s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 ip
2023/11/14 14:41:55 [DEBUG] GET http://192.168.39.16:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-317784 addons disable registry --alsologtostderr -v=1: exit status 11 (419.998416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1114 14:41:55.563290  833673 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:41:55.563436  833673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:41:55.563448  833673 out.go:309] Setting ErrFile to fd 2...
	I1114 14:41:55.563456  833673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:41:55.563639  833673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 14:41:55.563911  833673 mustload.go:65] Loading cluster: addons-317784
	I1114 14:41:55.564282  833673 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:41:55.564306  833673 addons.go:594] checking whether the cluster is paused
	I1114 14:41:55.564394  833673 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:41:55.564407  833673 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:41:55.564782  833673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:41:55.564850  833673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:41:55.579476  833673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I1114 14:41:55.580027  833673 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:41:55.580589  833673 main.go:141] libmachine: Using API Version  1
	I1114 14:41:55.580616  833673 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:41:55.580984  833673 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:41:55.581201  833673 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:41:55.582957  833673 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:41:55.583182  833673 ssh_runner.go:195] Run: systemctl --version
	I1114 14:41:55.583201  833673 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:41:55.585873  833673 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:41:55.586418  833673 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:41:55.586457  833673 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:41:55.586617  833673 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:41:55.586768  833673 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:41:55.586938  833673 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:41:55.587045  833673 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:41:55.698244  833673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 14:41:55.698338  833673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 14:41:55.824783  833673 cri.go:89] found id: "68fe1d59991055a8df480aa58562cbbcd45a3c5fa21e4f0f4230cccad516ec5e"
	I1114 14:41:55.824812  833673 cri.go:89] found id: "6d1955ef47b5a9aaf706a32ecf9f9a5a26ed07244486e8c98a5704d0d1064555"
	I1114 14:41:55.824826  833673 cri.go:89] found id: "c9eda30492e5fa9258718e37c889a5888100889cc239d2f75bb07b696854db7a"
	I1114 14:41:55.824833  833673 cri.go:89] found id: "e9a601e09b1d581217a534ad0b3018dbea455230fdedf899299ad4644ebae16b"
	I1114 14:41:55.824838  833673 cri.go:89] found id: "554ca1db390278ba8653219550a19585378c02537c8ca104c43f9a5897d17080"
	I1114 14:41:55.824844  833673 cri.go:89] found id: "4277a3ed9ccf549d22c8ee025bb9c5eadd8bb8c47ae5397c4fc2819e2caaf694"
	I1114 14:41:55.824853  833673 cri.go:89] found id: "18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28"
	I1114 14:41:55.824858  833673 cri.go:89] found id: "394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7"
	I1114 14:41:55.824862  833673 cri.go:89] found id: "d39a2a899a9e4f0983b1cfbf7c20c25550450f8524a937e6670c0890183bac29"
	I1114 14:41:55.824881  833673 cri.go:89] found id: "7c1c53ddd12dad9f5798577dacab6815144c6d4735e7f7122ccdac3c25276ddc"
	I1114 14:41:55.824890  833673 cri.go:89] found id: "226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514"
	I1114 14:41:55.824899  833673 cri.go:89] found id: "2f989de2637865a8a1a67e274eb3ebec6baaa4ac0f648ba9b9e95eec8b0594a7"
	I1114 14:41:55.824907  833673 cri.go:89] found id: "1dde2185933daf6048c68d2486eb0168f5ec58201e38dc7d851e4c50d06601e2"
	I1114 14:41:55.824919  833673 cri.go:89] found id: "15b3c6d74248251c00ee108bfabc05a066f9efcc5d704dc0026d62c6908c5fc8"
	I1114 14:41:55.824927  833673 cri.go:89] found id: "d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3"
	I1114 14:41:55.824933  833673 cri.go:89] found id: "14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db"
	I1114 14:41:55.824942  833673 cri.go:89] found id: "cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653"
	I1114 14:41:55.824952  833673 cri.go:89] found id: "aa69a346cb72279384c9e23ced7bfbfba1d1c3fdd1a36049f8d4cf280b38c293"
	I1114 14:41:55.824961  833673 cri.go:89] found id: "09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4"
	I1114 14:41:55.824967  833673 cri.go:89] found id: "ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf"
	I1114 14:41:55.824990  833673 cri.go:89] found id: "c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02"
	I1114 14:41:55.825011  833673 cri.go:89] found id: "505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887"
	I1114 14:41:55.825017  833673 cri.go:89] found id: "dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539"
	I1114 14:41:55.825022  833673 cri.go:89] found id: ""
	I1114 14:41:55.825083  833673 ssh_runner.go:195] Run: sudo runc list -f json
	I1114 14:41:55.910483  833673 main.go:141] libmachine: Making call to close driver server
	I1114 14:41:55.910523  833673 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:41:55.910894  833673 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:41:55.910918  833673 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:41:55.910952  833673 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:41:55.913148  833673 out.go:177] 
	W1114 14:41:55.914557  833673 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-14T14:41:55Z" level=error msg="stat /run/runc/9b08bc7254392dda41d4c5fbdb449402b15952f58c023163341bbd06b67f7c4c: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-14T14:41:55Z" level=error msg="stat /run/runc/9b08bc7254392dda41d4c5fbdb449402b15952f58c023163341bbd06b67f7c4c: no such file or directory"
	
	W1114 14:41:55.914582  833673 out.go:239] * 
	* 
	W1114 14:41:55.919738  833673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 14:41:55.921078  833673 out.go:177] 

** /stderr **
addons_test.go:389: failed to disable registry addon. args "out/minikube-linux-amd64 -p addons-317784 addons disable registry --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-317784 -n addons-317784
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 logs -n 25: (1.979521197s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:38 UTC |                     |
	|         | -p download-only-430804              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | -p download-only-430804              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| delete  | -p download-only-430804              | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| delete  | -p download-only-430804              | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| start   | --download-only -p                   | binary-mirror-886653 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | binary-mirror-886653                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44247               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-886653              | binary-mirror-886653 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| addons  | enable dashboard -p                  | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | addons-317784                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | addons-317784                        |                      |         |         |                     |                     |
	| start   | -p addons-317784 --wait=true         | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	|         | -p addons-317784                     |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	|         | -p addons-317784                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-317784 addons disable         | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| ssh     | addons-317784 ssh curl -s            | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-317784 ip                     | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	| addons  | addons-317784 addons disable         | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC |                     |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:39:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:39:09.052952  832572 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:39:09.053099  832572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:39:09.053107  832572 out.go:309] Setting ErrFile to fd 2...
	I1114 14:39:09.053115  832572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:39:09.053344  832572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 14:39:09.053994  832572 out.go:303] Setting JSON to false
	I1114 14:39:09.055580  832572 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":40901,"bootTime":1699931848,"procs":894,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:39:09.055673  832572 start.go:138] virtualization: kvm guest
	I1114 14:39:09.058095  832572 out.go:177] * [addons-317784] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:39:09.059991  832572 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 14:39:09.059980  832572 notify.go:220] Checking for updates...
	I1114 14:39:09.061689  832572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:39:09.063158  832572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:39:09.064456  832572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:09.065697  832572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 14:39:09.066965  832572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:39:09.068482  832572 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:39:09.099552  832572 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 14:39:09.100844  832572 start.go:298] selected driver: kvm2
	I1114 14:39:09.100859  832572 start.go:902] validating driver "kvm2" against <nil>
	I1114 14:39:09.100873  832572 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:39:09.101844  832572 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:39:09.102024  832572 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 14:39:09.116399  832572 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 14:39:09.116466  832572 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 14:39:09.116719  832572 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 14:39:09.116815  832572 cni.go:84] Creating CNI manager for ""
	I1114 14:39:09.116832  832572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:09.116848  832572 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 14:39:09.116860  832572 start_flags.go:323] config:
	{Name:addons-317784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:39:09.117040  832572 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:39:09.118890  832572 out.go:177] * Starting control plane node addons-317784 in cluster addons-317784
	I1114 14:39:09.120112  832572 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:09.120149  832572 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 14:39:09.120163  832572 cache.go:56] Caching tarball of preloaded images
	I1114 14:39:09.120252  832572 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 14:39:09.120266  832572 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 14:39:09.120699  832572 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/config.json ...
	I1114 14:39:09.120727  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/config.json: {Name:mk6b3b140c9356d26ddf8c22aad8ca9884759df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:09.120911  832572 start.go:365] acquiring machines lock for addons-317784: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 14:39:09.120983  832572 start.go:369] acquired machines lock for "addons-317784" in 55.229µs
	I1114 14:39:09.121013  832572 start.go:93] Provisioning new machine with config: &{Name:addons-317784 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:39:09.121079  832572 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 14:39:09.122843  832572 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1114 14:39:09.122976  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:39:09.123025  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:39:09.136101  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I1114 14:39:09.136557  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:39:09.137545  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:39:09.137579  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:39:09.138719  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:39:09.138940  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:09.139149  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:09.139321  832572 start.go:159] libmachine.API.Create for "addons-317784" (driver="kvm2")
	I1114 14:39:09.139375  832572 client.go:168] LocalClient.Create starting
	I1114 14:39:09.139465  832572 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 14:39:09.197124  832572 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 14:39:09.320157  832572 main.go:141] libmachine: Running pre-create checks...
	I1114 14:39:09.320183  832572 main.go:141] libmachine: (addons-317784) Calling .PreCreateCheck
	I1114 14:39:09.320787  832572 main.go:141] libmachine: (addons-317784) Calling .GetConfigRaw
	I1114 14:39:09.321224  832572 main.go:141] libmachine: Creating machine...
	I1114 14:39:09.321242  832572 main.go:141] libmachine: (addons-317784) Calling .Create
	I1114 14:39:09.321395  832572 main.go:141] libmachine: (addons-317784) Creating KVM machine...
	I1114 14:39:09.322762  832572 main.go:141] libmachine: (addons-317784) DBG | found existing default KVM network
	I1114 14:39:09.323495  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.323343  832594 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1114 14:39:09.329312  832572 main.go:141] libmachine: (addons-317784) DBG | trying to create private KVM network mk-addons-317784 192.168.39.0/24...
	I1114 14:39:09.400884  832572 main.go:141] libmachine: (addons-317784) DBG | private KVM network mk-addons-317784 192.168.39.0/24 created
	I1114 14:39:09.400927  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.400867  832594 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:09.400951  832572 main.go:141] libmachine: (addons-317784) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784 ...
	I1114 14:39:09.400969  832572 main.go:141] libmachine: (addons-317784) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 14:39:09.401061  832572 main.go:141] libmachine: (addons-317784) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 14:39:09.632257  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.632134  832594 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa...
	I1114 14:39:09.733804  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.733640  832594 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/addons-317784.rawdisk...
	I1114 14:39:09.733859  832572 main.go:141] libmachine: (addons-317784) DBG | Writing magic tar header
	I1114 14:39:09.733875  832572 main.go:141] libmachine: (addons-317784) DBG | Writing SSH key tar header
	I1114 14:39:09.733885  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.733814  832594 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784 ...
	I1114 14:39:09.734045  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784
	I1114 14:39:09.734080  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 14:39:09.734094  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784 (perms=drwx------)
	I1114 14:39:09.734111  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 14:39:09.734127  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 14:39:09.734141  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:09.734160  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 14:39:09.734186  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 14:39:09.734196  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 14:39:09.734204  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins
	I1114 14:39:09.734215  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home
	I1114 14:39:09.734226  832572 main.go:141] libmachine: (addons-317784) DBG | Skipping /home - not owner
	I1114 14:39:09.734236  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 14:39:09.734242  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 14:39:09.734265  832572 main.go:141] libmachine: (addons-317784) Creating domain...
	I1114 14:39:09.735388  832572 main.go:141] libmachine: (addons-317784) define libvirt domain using xml: 
	I1114 14:39:09.735404  832572 main.go:141] libmachine: (addons-317784) <domain type='kvm'>
	I1114 14:39:09.735411  832572 main.go:141] libmachine: (addons-317784)   <name>addons-317784</name>
	I1114 14:39:09.735417  832572 main.go:141] libmachine: (addons-317784)   <memory unit='MiB'>4000</memory>
	I1114 14:39:09.735423  832572 main.go:141] libmachine: (addons-317784)   <vcpu>2</vcpu>
	I1114 14:39:09.735428  832572 main.go:141] libmachine: (addons-317784)   <features>
	I1114 14:39:09.735440  832572 main.go:141] libmachine: (addons-317784)     <acpi/>
	I1114 14:39:09.735481  832572 main.go:141] libmachine: (addons-317784)     <apic/>
	I1114 14:39:09.735497  832572 main.go:141] libmachine: (addons-317784)     <pae/>
	I1114 14:39:09.735503  832572 main.go:141] libmachine: (addons-317784)     
	I1114 14:39:09.735508  832572 main.go:141] libmachine: (addons-317784)   </features>
	I1114 14:39:09.735514  832572 main.go:141] libmachine: (addons-317784)   <cpu mode='host-passthrough'>
	I1114 14:39:09.735519  832572 main.go:141] libmachine: (addons-317784)   
	I1114 14:39:09.735525  832572 main.go:141] libmachine: (addons-317784)   </cpu>
	I1114 14:39:09.735530  832572 main.go:141] libmachine: (addons-317784)   <os>
	I1114 14:39:09.735540  832572 main.go:141] libmachine: (addons-317784)     <type>hvm</type>
	I1114 14:39:09.735549  832572 main.go:141] libmachine: (addons-317784)     <boot dev='cdrom'/>
	I1114 14:39:09.735559  832572 main.go:141] libmachine: (addons-317784)     <boot dev='hd'/>
	I1114 14:39:09.735577  832572 main.go:141] libmachine: (addons-317784)     <bootmenu enable='no'/>
	I1114 14:39:09.735594  832572 main.go:141] libmachine: (addons-317784)   </os>
	I1114 14:39:09.735603  832572 main.go:141] libmachine: (addons-317784)   <devices>
	I1114 14:39:09.735611  832572 main.go:141] libmachine: (addons-317784)     <disk type='file' device='cdrom'>
	I1114 14:39:09.735621  832572 main.go:141] libmachine: (addons-317784)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/boot2docker.iso'/>
	I1114 14:39:09.735631  832572 main.go:141] libmachine: (addons-317784)       <target dev='hdc' bus='scsi'/>
	I1114 14:39:09.735642  832572 main.go:141] libmachine: (addons-317784)       <readonly/>
	I1114 14:39:09.735654  832572 main.go:141] libmachine: (addons-317784)     </disk>
	I1114 14:39:09.735667  832572 main.go:141] libmachine: (addons-317784)     <disk type='file' device='disk'>
	I1114 14:39:09.735683  832572 main.go:141] libmachine: (addons-317784)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 14:39:09.735709  832572 main.go:141] libmachine: (addons-317784)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/addons-317784.rawdisk'/>
	I1114 14:39:09.735721  832572 main.go:141] libmachine: (addons-317784)       <target dev='hda' bus='virtio'/>
	I1114 14:39:09.735727  832572 main.go:141] libmachine: (addons-317784)     </disk>
	I1114 14:39:09.735735  832572 main.go:141] libmachine: (addons-317784)     <interface type='network'>
	I1114 14:39:09.735746  832572 main.go:141] libmachine: (addons-317784)       <source network='mk-addons-317784'/>
	I1114 14:39:09.735760  832572 main.go:141] libmachine: (addons-317784)       <model type='virtio'/>
	I1114 14:39:09.735770  832572 main.go:141] libmachine: (addons-317784)     </interface>
	I1114 14:39:09.735783  832572 main.go:141] libmachine: (addons-317784)     <interface type='network'>
	I1114 14:39:09.735796  832572 main.go:141] libmachine: (addons-317784)       <source network='default'/>
	I1114 14:39:09.735808  832572 main.go:141] libmachine: (addons-317784)       <model type='virtio'/>
	I1114 14:39:09.735837  832572 main.go:141] libmachine: (addons-317784)     </interface>
	I1114 14:39:09.735859  832572 main.go:141] libmachine: (addons-317784)     <serial type='pty'>
	I1114 14:39:09.735877  832572 main.go:141] libmachine: (addons-317784)       <target port='0'/>
	I1114 14:39:09.735887  832572 main.go:141] libmachine: (addons-317784)     </serial>
	I1114 14:39:09.735897  832572 main.go:141] libmachine: (addons-317784)     <console type='pty'>
	I1114 14:39:09.735908  832572 main.go:141] libmachine: (addons-317784)       <target type='serial' port='0'/>
	I1114 14:39:09.735922  832572 main.go:141] libmachine: (addons-317784)     </console>
	I1114 14:39:09.735934  832572 main.go:141] libmachine: (addons-317784)     <rng model='virtio'>
	I1114 14:39:09.735949  832572 main.go:141] libmachine: (addons-317784)       <backend model='random'>/dev/random</backend>
	I1114 14:39:09.735960  832572 main.go:141] libmachine: (addons-317784)     </rng>
	I1114 14:39:09.735973  832572 main.go:141] libmachine: (addons-317784)     
	I1114 14:39:09.735983  832572 main.go:141] libmachine: (addons-317784)     
	I1114 14:39:09.735993  832572 main.go:141] libmachine: (addons-317784)   </devices>
	I1114 14:39:09.736005  832572 main.go:141] libmachine: (addons-317784) </domain>
	I1114 14:39:09.736020  832572 main.go:141] libmachine: (addons-317784) 
	I1114 14:39:09.740496  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:13:d7:49 in network default
	I1114 14:39:09.741231  832572 main.go:141] libmachine: (addons-317784) Ensuring networks are active...
	I1114 14:39:09.741254  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:09.741901  832572 main.go:141] libmachine: (addons-317784) Ensuring network default is active
	I1114 14:39:09.742192  832572 main.go:141] libmachine: (addons-317784) Ensuring network mk-addons-317784 is active
	I1114 14:39:09.742683  832572 main.go:141] libmachine: (addons-317784) Getting domain xml...
	I1114 14:39:09.743391  832572 main.go:141] libmachine: (addons-317784) Creating domain...
	I1114 14:39:10.966555  832572 main.go:141] libmachine: (addons-317784) Waiting to get IP...
	I1114 14:39:10.967271  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:10.967816  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:10.967868  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:10.967797  832594 retry.go:31] will retry after 240.12088ms: waiting for machine to come up
	I1114 14:39:11.209223  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:11.209636  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:11.209675  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:11.209595  832594 retry.go:31] will retry after 309.483531ms: waiting for machine to come up
	I1114 14:39:11.521270  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:11.521697  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:11.521733  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:11.521637  832594 retry.go:31] will retry after 471.628216ms: waiting for machine to come up
	I1114 14:39:11.995203  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:11.995798  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:11.995829  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:11.995751  832594 retry.go:31] will retry after 519.057067ms: waiting for machine to come up
	I1114 14:39:12.516423  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:12.516898  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:12.516932  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:12.516825  832594 retry.go:31] will retry after 718.762554ms: waiting for machine to come up
	I1114 14:39:13.236753  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:13.237201  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:13.237236  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:13.237133  832594 retry.go:31] will retry after 811.725044ms: waiting for machine to come up
	I1114 14:39:14.050163  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:14.050638  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:14.050671  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:14.050577  832594 retry.go:31] will retry after 913.225481ms: waiting for machine to come up
	I1114 14:39:14.965344  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:14.965842  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:14.965875  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:14.965748  832594 retry.go:31] will retry after 999.497751ms: waiting for machine to come up
	I1114 14:39:15.966960  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:15.967359  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:15.967389  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:15.967308  832594 retry.go:31] will retry after 1.790301588s: waiting for machine to come up
	I1114 14:39:17.760304  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:17.760777  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:17.760811  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:17.760705  832594 retry.go:31] will retry after 1.793227337s: waiting for machine to come up
	I1114 14:39:19.556092  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:19.556536  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:19.556570  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:19.556495  832594 retry.go:31] will retry after 2.414609963s: waiting for machine to come up
	I1114 14:39:21.974452  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:21.975013  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:21.975050  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:21.974955  832594 retry.go:31] will retry after 3.059180002s: waiting for machine to come up
	I1114 14:39:25.035634  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:25.036086  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:25.036111  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:25.036043  832594 retry.go:31] will retry after 3.834961778s: waiting for machine to come up
	I1114 14:39:28.876050  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:28.876510  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:28.876534  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:28.876466  832594 retry.go:31] will retry after 3.579833892s: waiting for machine to come up
	I1114 14:39:32.460168  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.460685  832572 main.go:141] libmachine: (addons-317784) Found IP for machine: 192.168.39.16
	I1114 14:39:32.460711  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has current primary IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.460720  832572 main.go:141] libmachine: (addons-317784) Reserving static IP address...
	I1114 14:39:32.461141  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find host DHCP lease matching {name: "addons-317784", mac: "52:54:00:0f:c8:7d", ip: "192.168.39.16"} in network mk-addons-317784
	I1114 14:39:32.536153  832572 main.go:141] libmachine: (addons-317784) DBG | Getting to WaitForSSH function...
	I1114 14:39:32.536188  832572 main.go:141] libmachine: (addons-317784) Reserved static IP address: 192.168.39.16
	I1114 14:39:32.536203  832572 main.go:141] libmachine: (addons-317784) Waiting for SSH to be available...
	I1114 14:39:32.538829  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.539235  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.539264  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.539438  832572 main.go:141] libmachine: (addons-317784) DBG | Using SSH client type: external
	I1114 14:39:32.539470  832572 main.go:141] libmachine: (addons-317784) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa (-rw-------)
	I1114 14:39:32.539532  832572 main.go:141] libmachine: (addons-317784) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 14:39:32.539565  832572 main.go:141] libmachine: (addons-317784) DBG | About to run SSH command:
	I1114 14:39:32.539580  832572 main.go:141] libmachine: (addons-317784) DBG | exit 0
	I1114 14:39:32.624235  832572 main.go:141] libmachine: (addons-317784) DBG | SSH cmd err, output: <nil>: 
	I1114 14:39:32.624510  832572 main.go:141] libmachine: (addons-317784) KVM machine creation complete!
	I1114 14:39:32.624857  832572 main.go:141] libmachine: (addons-317784) Calling .GetConfigRaw
	I1114 14:39:32.625389  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:32.625671  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:32.625827  832572 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 14:39:32.625842  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:39:32.627221  832572 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 14:39:32.627243  832572 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 14:39:32.627252  832572 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 14:39:32.627261  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.629388  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.629756  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.629787  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.629877  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.630096  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.630256  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.630477  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.630671  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.631015  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.631027  832572 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 14:39:32.739878  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:39:32.739905  832572 main.go:141] libmachine: Detecting the provisioner...
	I1114 14:39:32.739914  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.742635  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.742985  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.743013  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.743281  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.743492  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.743690  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.743815  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.744017  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.744429  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.744446  832572 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 14:39:32.853399  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 14:39:32.853507  832572 main.go:141] libmachine: found compatible host: buildroot
	I1114 14:39:32.853519  832572 main.go:141] libmachine: Provisioning with buildroot...
	I1114 14:39:32.853529  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:32.853921  832572 buildroot.go:166] provisioning hostname "addons-317784"
	I1114 14:39:32.853957  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:32.854188  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.856942  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.857316  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.857345  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.857497  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.857689  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.857833  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.857992  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.858148  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.858516  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.858530  832572 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-317784 && echo "addons-317784" | sudo tee /etc/hostname
	I1114 14:39:32.982771  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-317784
	
	I1114 14:39:32.982803  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.985627  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.985977  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.986009  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.986215  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.986410  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.986610  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.986756  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.986954  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.987305  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.987330  832572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-317784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-317784/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-317784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:39:33.104338  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:39:33.104381  832572 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 14:39:33.104415  832572 buildroot.go:174] setting up certificates
	I1114 14:39:33.104430  832572 provision.go:83] configureAuth start
	I1114 14:39:33.104450  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:33.104806  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:33.107688  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.108079  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.108117  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.108223  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.110524  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.110806  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.110834  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.111000  832572 provision.go:138] copyHostCerts
	I1114 14:39:33.111084  832572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 14:39:33.111224  832572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 14:39:33.111285  832572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 14:39:33.111329  832572 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.addons-317784 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube addons-317784]
	I1114 14:39:33.207568  832572 provision.go:172] copyRemoteCerts
	I1114 14:39:33.207622  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:39:33.207646  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.210319  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.210741  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.210773  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.210969  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.211169  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.211310  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.211477  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:33.293933  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:39:33.314955  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1114 14:39:33.335727  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:39:33.356707  832572 provision.go:86] duration metric: configureAuth took 252.258663ms
	I1114 14:39:33.356734  832572 buildroot.go:189] setting minikube options for container-runtime
	I1114 14:39:33.356963  832572 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:39:33.357055  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.359795  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.360126  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.360152  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.360312  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.360521  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.360669  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.360822  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.360972  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:33.361352  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:33.361382  832572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:39:33.670489  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:39:33.670520  832572 main.go:141] libmachine: Checking connection to Docker...
	I1114 14:39:33.670548  832572 main.go:141] libmachine: (addons-317784) Calling .GetURL
	I1114 14:39:33.671804  832572 main.go:141] libmachine: (addons-317784) DBG | Using libvirt version 6000000
	I1114 14:39:33.673934  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.674288  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.674325  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.674474  832572 main.go:141] libmachine: Docker is up and running!
	I1114 14:39:33.674503  832572 main.go:141] libmachine: Reticulating splines...
	I1114 14:39:33.674514  832572 client.go:171] LocalClient.Create took 24.535124455s
	I1114 14:39:33.674562  832572 start.go:167] duration metric: libmachine.API.Create for "addons-317784" took 24.535243508s
	I1114 14:39:33.674584  832572 start.go:300] post-start starting for "addons-317784" (driver="kvm2")
	I1114 14:39:33.674599  832572 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:39:33.674625  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.674895  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:39:33.674920  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.677074  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.677421  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.677448  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.677537  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.677724  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.677888  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.678030  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:33.766858  832572 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:39:33.771189  832572 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 14:39:33.771219  832572 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 14:39:33.771284  832572 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 14:39:33.771314  832572 start.go:303] post-start completed in 96.722326ms
	I1114 14:39:33.771363  832572 main.go:141] libmachine: (addons-317784) Calling .GetConfigRaw
	I1114 14:39:33.772065  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:33.775136  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.775548  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.775583  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.775865  832572 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/config.json ...
	I1114 14:39:33.776034  832572 start.go:128] duration metric: createHost completed in 24.654943759s
	I1114 14:39:33.776059  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.778316  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.778651  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.778698  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.778780  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.778969  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.779136  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.779315  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.779458  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:33.779834  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:33.779846  832572 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 14:39:33.893471  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699972773.873976570
	
	I1114 14:39:33.893503  832572 fix.go:206] guest clock: 1699972773.873976570
	I1114 14:39:33.893513  832572 fix.go:219] Guest: 2023-11-14 14:39:33.87397657 +0000 UTC Remote: 2023-11-14 14:39:33.776046379 +0000 UTC m=+24.772082453 (delta=97.930191ms)
	I1114 14:39:33.893566  832572 fix.go:190] guest clock delta is within tolerance: 97.930191ms
	I1114 14:39:33.893577  832572 start.go:83] releasing machines lock for "addons-317784", held for 24.772577516s
	I1114 14:39:33.893611  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.893956  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:33.896411  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.896869  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.896901  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.897066  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.897589  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.897761  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.897851  832572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:39:33.897885  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.898165  832572 ssh_runner.go:195] Run: cat /version.json
	I1114 14:39:33.898194  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.900960  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901230  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901350  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.901378  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901503  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.901613  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.901643  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901669  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.901767  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.901848  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.901953  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.901983  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:33.902121  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.902261  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:34.008354  832572 ssh_runner.go:195] Run: systemctl --version
	I1114 14:39:34.014133  832572 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:39:34.172934  832572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 14:39:34.178768  832572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 14:39:34.178843  832572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:39:34.194373  832572 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:39:34.194400  832572 start.go:472] detecting cgroup driver to use...
	I1114 14:39:34.194468  832572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:39:34.208205  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:39:34.221108  832572 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:39:34.221178  832572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:39:34.234144  832572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:39:34.247071  832572 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 14:39:34.346956  832572 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:39:34.465035  832572 docker.go:219] disabling docker service ...
	I1114 14:39:34.465112  832572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:39:34.478789  832572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:39:34.490653  832572 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:39:34.591474  832572 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:39:34.690445  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:39:34.704413  832572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:39:34.721857  832572 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 14:39:34.721931  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.732055  832572 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 14:39:34.732141  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.742890  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.753224  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.763611  832572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:39:34.774398  832572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:39:34.783783  832572 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 14:39:34.783843  832572 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 14:39:34.797725  832572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:39:34.807353  832572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:39:34.905313  832572 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 14:39:35.350749  832572 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 14:39:35.350861  832572 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 14:39:35.359838  832572 start.go:540] Will wait 60s for crictl version
	I1114 14:39:35.359946  832572 ssh_runner.go:195] Run: which crictl
	I1114 14:39:35.363920  832572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:39:35.408965  832572 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 14:39:35.409074  832572 ssh_runner.go:195] Run: crio --version
	I1114 14:39:35.462767  832572 ssh_runner.go:195] Run: crio --version
	I1114 14:39:35.597286  832572 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 14:39:35.660470  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:35.663541  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:35.663855  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:35.663896  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:35.664143  832572 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 14:39:35.668802  832572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:39:35.681489  832572 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:35.681562  832572 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:39:35.716328  832572 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 14:39:35.716414  832572 ssh_runner.go:195] Run: which lz4
	I1114 14:39:35.720308  832572 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 14:39:35.724398  832572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 14:39:35.724438  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 14:39:37.393100  832572 crio.go:444] Took 1.672847 seconds to copy over tarball
	I1114 14:39:37.393191  832572 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 14:39:40.427675  832572 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034439248s)
	I1114 14:39:40.427707  832572 crio.go:451] Took 3.034578 seconds to extract the tarball
	I1114 14:39:40.427720  832572 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 14:39:40.471741  832572 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:39:40.543099  832572 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 14:39:40.543134  832572 cache_images.go:84] Images are preloaded, skipping loading
	I1114 14:39:40.543215  832572 ssh_runner.go:195] Run: crio config
	I1114 14:39:40.603708  832572 cni.go:84] Creating CNI manager for ""
	I1114 14:39:40.603742  832572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:40.603770  832572 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:39:40.603829  832572 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-317784 NodeName:addons-317784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:39:40.603980  832572 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-317784"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 14:39:40.604068  832572 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-317784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:39:40.604141  832572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:39:40.614515  832572 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:39:40.614605  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 14:39:40.624056  832572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1114 14:39:40.639756  832572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:39:40.655596  832572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1114 14:39:40.671574  832572 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I1114 14:39:40.675461  832572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:39:40.687577  832572 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784 for IP: 192.168.39.16
	I1114 14:39:40.687627  832572 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:40.687803  832572 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 14:39:40.831419  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt ...
	I1114 14:39:40.831452  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt: {Name:mk2728f1a821bdf3e5ec632580089d84c6352049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:40.831617  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key ...
	I1114 14:39:40.831628  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key: {Name:mk5a59ca238d6d31d365882787f287599b8d399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:40.831726  832572 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 14:39:41.013453  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt ...
	I1114 14:39:41.013486  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt: {Name:mk257c9eb23f7fbdaa001814b4fedd5597f62c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.013649  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key ...
	I1114 14:39:41.013660  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key: {Name:mk80a0bfef16c16f5e90197d89766bc78fe11e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.013767  832572 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.key
	I1114 14:39:41.013781  832572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt with IP's: []
	I1114 14:39:41.351574  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt ...
	I1114 14:39:41.351619  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: {Name:mk6ffe80523732e40b0dbc0fa24ca3f3c47bb6df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.351812  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.key ...
	I1114 14:39:41.351833  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.key: {Name:mke16ac5b9415fb2c28046a3c54ebef7d6735ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.351929  832572 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3
	I1114 14:39:41.351950  832572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3 with IP's: [192.168.39.16 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 14:39:41.665558  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3 ...
	I1114 14:39:41.665604  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3: {Name:mk80228808592cbc215c7a6c53604575d45b4bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.665789  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3 ...
	I1114 14:39:41.665813  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3: {Name:mkb8c0c446598f67922cc617138e3a1d13df8389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.665916  832572 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt
	I1114 14:39:41.666011  832572 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key
	I1114 14:39:41.666072  832572 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key
	I1114 14:39:41.666094  832572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt with IP's: []
	I1114 14:39:41.709495  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt ...
	I1114 14:39:41.709533  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt: {Name:mk15b77f79a171ab28b594321bab6aa741d4d2b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.709734  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key ...
	I1114 14:39:41.709756  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key: {Name:mkd742e996dcfdb2f8fc372bf4af5205735a5e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.710008  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 14:39:41.710061  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:39:41.710104  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:39:41.710145  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 14:39:41.710915  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 14:39:41.736487  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 14:39:41.763054  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 14:39:41.787250  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 14:39:41.809772  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:39:41.832415  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 14:39:41.854754  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:39:41.877286  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 14:39:41.899728  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:39:41.922287  832572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 14:39:41.937592  832572 ssh_runner.go:195] Run: openssl version
	I1114 14:39:41.943055  832572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:39:41.952905  832572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:39:41.957341  832572 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:39:41.957401  832572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:39:41.962715  832572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:39:41.972731  832572 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:39:41.976749  832572 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:39:41.976805  832572 kubeadm.go:404] StartCluster: {Name:addons-317784 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:39:41.976887  832572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 14:39:41.976934  832572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 14:39:42.013381  832572 cri.go:89] found id: ""
	I1114 14:39:42.013476  832572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 14:39:42.023048  832572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 14:39:42.032155  832572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 14:39:42.041692  832572 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:39:42.041749  832572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 14:39:42.096674  832572 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 14:39:42.096806  832572 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 14:39:42.210879  832572 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 14:39:42.211021  832572 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 14:39:42.211166  832572 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 14:39:42.440989  832572 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 14:39:42.539472  832572 out.go:204]   - Generating certificates and keys ...
	I1114 14:39:42.539602  832572 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 14:39:42.539721  832572 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 14:39:42.617374  832572 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 14:39:42.951029  832572 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 14:39:43.143486  832572 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 14:39:43.571083  832572 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 14:39:43.649832  832572 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 14:39:43.650014  832572 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-317784 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I1114 14:39:43.787785  832572 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 14:39:43.788002  832572 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-317784 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I1114 14:39:44.289217  832572 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 14:39:44.786892  832572 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 14:39:45.056305  832572 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 14:39:45.056634  832572 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 14:39:45.789276  832572 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 14:39:45.903297  832572 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 14:39:46.078471  832572 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 14:39:46.160831  832572 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 14:39:46.161585  832572 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 14:39:46.165968  832572 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 14:39:46.168014  832572 out.go:204]   - Booting up control plane ...
	I1114 14:39:46.168155  832572 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 14:39:46.168286  832572 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 14:39:46.168399  832572 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 14:39:46.184689  832572 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:39:46.185577  832572 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:39:46.185737  832572 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 14:39:46.302491  832572 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 14:39:53.304300  832572 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002823 seconds
	I1114 14:39:53.304493  832572 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 14:39:53.323852  832572 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 14:39:53.856050  832572 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 14:39:53.856416  832572 kubeadm.go:322] [mark-control-plane] Marking the node addons-317784 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 14:39:54.370361  832572 kubeadm.go:322] [bootstrap-token] Using token: tt6miv.sn7glg7rnalzqd3u
	I1114 14:39:54.371827  832572 out.go:204]   - Configuring RBAC rules ...
	I1114 14:39:54.371977  832572 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 14:39:54.378021  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 14:39:54.390001  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 14:39:54.394109  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 14:39:54.397729  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 14:39:54.401766  832572 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 14:39:54.419163  832572 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 14:39:54.654128  832572 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 14:39:54.784521  832572 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 14:39:54.785034  832572 kubeadm.go:322] 
	I1114 14:39:54.785148  832572 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 14:39:54.785173  832572 kubeadm.go:322] 
	I1114 14:39:54.785269  832572 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 14:39:54.785281  832572 kubeadm.go:322] 
	I1114 14:39:54.785304  832572 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 14:39:54.785368  832572 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 14:39:54.785435  832572 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 14:39:54.785446  832572 kubeadm.go:322] 
	I1114 14:39:54.785522  832572 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 14:39:54.785531  832572 kubeadm.go:322] 
	I1114 14:39:54.785589  832572 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 14:39:54.785595  832572 kubeadm.go:322] 
	I1114 14:39:54.785669  832572 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 14:39:54.785768  832572 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 14:39:54.785864  832572 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 14:39:54.785871  832572 kubeadm.go:322] 
	I1114 14:39:54.786015  832572 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 14:39:54.786130  832572 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 14:39:54.786154  832572 kubeadm.go:322] 
	I1114 14:39:54.786250  832572 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tt6miv.sn7glg7rnalzqd3u \
	I1114 14:39:54.786378  832572 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 14:39:54.786418  832572 kubeadm.go:322] 	--control-plane 
	I1114 14:39:54.786433  832572 kubeadm.go:322] 
	I1114 14:39:54.786552  832572 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 14:39:54.786560  832572 kubeadm.go:322] 
	I1114 14:39:54.786665  832572 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tt6miv.sn7glg7rnalzqd3u \
	I1114 14:39:54.786804  832572 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 14:39:54.787006  832572 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:39:54.787039  832572 cni.go:84] Creating CNI manager for ""
	I1114 14:39:54.787050  832572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:54.788842  832572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 14:39:54.790299  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 14:39:54.808066  832572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 14:39:54.836187  832572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 14:39:54.836317  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:54.836335  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=addons-317784 minikube.k8s.io/updated_at=2023_11_14T14_39_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:54.894440  832572 ops.go:34] apiserver oom_adj: -16
	I1114 14:39:55.067822  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:55.163043  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:55.752440  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:56.252146  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:56.752263  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:57.252455  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:57.752592  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:58.252570  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:58.751909  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:59.252307  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:59.752588  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:00.252065  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:00.752424  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:01.252538  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:01.751755  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:02.252355  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:02.752426  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:03.252165  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:03.752223  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:04.252703  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:04.752842  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:05.252282  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:05.752368  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:06.252115  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:06.752281  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:07.252672  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:07.752574  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:07.863962  832572 kubeadm.go:1081] duration metric: took 13.027720147s to wait for elevateKubeSystemPrivileges.
	I1114 14:40:07.864030  832572 kubeadm.go:406] StartCluster complete in 25.88723075s
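	The repeated "kubectl get sa default" invocations above are minikube polling, roughly twice per second, until the cluster's default ServiceAccount exists before it elevates kube-system privileges and declares StartCluster complete. A minimal sketch of that wait loop, assuming a 2-minute timeout and the kubectl binary and kubeconfig paths shown in the log (this is an illustration, not minikube's actual implementation):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // pollDefaultSA retries "kubectl get sa default" until it succeeds or the
	    // timeout elapses, mirroring the retry pattern visible in the log above.
	    func pollDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
	                "--kubeconfig="+kubeconfig)
	            if err := cmd.Run(); err == nil {
	                return nil // the default ServiceAccount is present
	            }
	            time.Sleep(500 * time.Millisecond) // retry interval, as seen in the log
	        }
	        return fmt.Errorf("timed out after %s waiting for default ServiceAccount", timeout)
	    }

	    func main() {
	        // Paths taken from the log lines above; the timeout value is an assumption.
	        err := pollDefaultSA("/var/lib/minikube/binaries/v1.28.3/kubectl",
	            "/var/lib/minikube/kubeconfig", 2*time.Minute)
	        if err != nil {
	            fmt.Println(err)
	        }
	    }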
	I1114 14:40:07.864057  832572 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:40:07.864198  832572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:40:07.864596  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:40:07.864892  832572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 14:40:07.864898  832572 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1114 14:40:07.864989  832572 addons.go:69] Setting helm-tiller=true in profile "addons-317784"
	I1114 14:40:07.865000  832572 addons.go:69] Setting ingress=true in profile "addons-317784"
	I1114 14:40:07.865002  832572 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-317784"
	I1114 14:40:07.865020  832572 addons.go:231] Setting addon helm-tiller=true in "addons-317784"
	I1114 14:40:07.865021  832572 addons.go:69] Setting default-storageclass=true in profile "addons-317784"
	I1114 14:40:07.865047  832572 addons.go:69] Setting metrics-server=true in profile "addons-317784"
	I1114 14:40:07.865062  832572 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-317784"
	I1114 14:40:07.865060  832572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-317784"
	I1114 14:40:07.865070  832572 addons.go:231] Setting addon metrics-server=true in "addons-317784"
	I1114 14:40:07.865091  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865106  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865112  832572 addons.go:69] Setting cloud-spanner=true in profile "addons-317784"
	I1114 14:40:07.865123  832572 addons.go:231] Setting addon cloud-spanner=true in "addons-317784"
	I1114 14:40:07.865131  832572 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:40:07.865147  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865032  832572 addons.go:231] Setting addon ingress=true in "addons-317784"
	I1114 14:40:07.865106  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865221  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865506  832572 addons.go:69] Setting gcp-auth=true in profile "addons-317784"
	I1114 14:40:07.865534  832572 mustload.go:65] Loading cluster: addons-317784
	I1114 14:40:07.865548  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865557  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865578  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865583  832572 addons.go:69] Setting registry=true in profile "addons-317784"
	I1114 14:40:07.864990  832572 addons.go:69] Setting volumesnapshots=true in profile "addons-317784"
	I1114 14:40:07.865594  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865596  832572 addons.go:231] Setting addon registry=true in "addons-317784"
	I1114 14:40:07.865600  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865604  832572 addons.go:231] Setting addon volumesnapshots=true in "addons-317784"
	I1114 14:40:07.865611  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865629  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865037  832572 addons.go:69] Setting ingress-dns=true in profile "addons-317784"
	I1114 14:40:07.865643  832572 addons.go:231] Setting addon ingress-dns=true in "addons-317784"
	I1114 14:40:07.865644  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865042  832572 addons.go:69] Setting inspektor-gadget=true in profile "addons-317784"
	I1114 14:40:07.865661  832572 addons.go:231] Setting addon inspektor-gadget=true in "addons-317784"
	I1114 14:40:07.865663  832572 addons.go:69] Setting storage-provisioner=true in profile "addons-317784"
	I1114 14:40:07.865671  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865675  832572 addons.go:231] Setting addon storage-provisioner=true in "addons-317784"
	I1114 14:40:07.865714  832572 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:40:07.865723  832572 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-317784"
	I1114 14:40:07.865735  832572 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-317784"
	I1114 14:40:07.865956  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865963  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865986  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865984  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865990  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866016  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865579  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866041  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865630  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866050  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866075  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865718  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866107  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866294  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866394  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866394  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866408  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866411  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866418  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866430  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866044  832572 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-317784"
	I1114 14:40:07.866491  832572 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-317784"
	I1114 14:40:07.866535  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866759  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866824  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866899  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866941  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.885791  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I1114 14:40:07.885807  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I1114 14:40:07.885788  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I1114 14:40:07.885944  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I1114 14:40:07.886311  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886433  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886507  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886575  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886915  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.886933  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887056  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.887067  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.887078  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887081  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887270  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.887289  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887505  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.887529  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.887549  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.887735  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.888051  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.888089  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.888105  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.888141  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.888542  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.892855  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.894818  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44257
	I1114 14:40:07.895350  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I1114 14:40:07.901063  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I1114 14:40:07.901299  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.901353  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.901651  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.901776  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.901895  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.901946  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.902241  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.902259  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.902332  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.902348  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.902686  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.902980  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.903198  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.903331  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.903345  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.904108  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.905150  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.905189  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.906493  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.907635  832572 addons.go:231] Setting addon default-storageclass=true in "addons-317784"
	I1114 14:40:07.907685  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.908100  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.908133  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.908710  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.908751  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.921170  832572 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-317784" context rescaled to 1 replicas
	I1114 14:40:07.921225  832572 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:40:07.923197  832572 out.go:177] * Verifying Kubernetes components...
	I1114 14:40:07.924836  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:40:07.936443  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I1114 14:40:07.937199  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.937939  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.937966  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.938418  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.939046  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.939096  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.939313  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33165
	I1114 14:40:07.939446  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I1114 14:40:07.940021  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.940720  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.940754  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.940818  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I1114 14:40:07.941172  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I1114 14:40:07.941351  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.941423  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.941599  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I1114 14:40:07.941765  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.941831  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.942273  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.942291  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.942352  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I1114 14:40:07.942498  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.943089  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.943252  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1114 14:40:07.943790  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.943827  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.944084  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.944192  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.944337  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.944350  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.944433  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I1114 14:40:07.944516  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.944574  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.944596  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.944903  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.944921  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.946676  832572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:40:07.945337  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.945388  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.945413  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.945486  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.946075  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.946200  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.946882  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1114 14:40:07.947518  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I1114 14:40:07.948360  832572 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:40:07.948374  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 14:40:07.948399  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.948455  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.948524  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.948582  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.948808  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.949414  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.949448  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.949474  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.949491  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.949812  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.949831  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.950152  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.950329  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.950361  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.950574  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.950629  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.950734  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.950926  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.951336  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.951348  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.951356  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.951393  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.951412  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.951704  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.951795  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.951825  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.952243  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.952350  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.952385  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.952504  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.952526  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.952678  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1114 14:40:07.952870  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.952903  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.953060  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.953149  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.953228  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.953670  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.954214  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.954228  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.954507  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.954635  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.954695  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.956704  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1114 14:40:07.955311  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.956394  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.958153  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1114 14:40:07.958167  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1114 14:40:07.958186  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.960527  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1114 14:40:07.959410  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.963348  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1114 14:40:07.962360  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.962373  832572 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1114 14:40:07.963097  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.966118  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1114 14:40:07.967008  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I1114 14:40:07.968953  832572 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1114 14:40:07.968982  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1114 14:40:07.969005  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.964782  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.969073  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.964983  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.964728  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1114 14:40:07.969641  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I1114 14:40:07.967467  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1114 14:40:07.968180  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.967095  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1114 14:40:07.969944  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.970921  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 14:40:07.970482  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I1114 14:40:07.971230  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.971568  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.971857  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.971874  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.972854  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 14:40:07.972878  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.973481  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.975428  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1114 14:40:07.973523  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.974296  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.974367  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.974940  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.974992  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.975050  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.977335  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.977416  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.978213  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.978555  832572 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 14:40:07.978578  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.978649  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.978812  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.979637  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1114 14:40:07.979780  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.979803  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.979845  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1114 14:40:07.980005  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.980037  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.980322  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.981211  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I1114 14:40:07.982183  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1114 14:40:07.983652  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1114 14:40:07.982226  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.981951  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I1114 14:40:07.982841  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.985124  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1114 14:40:07.985145  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1114 14:40:07.985164  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.982872  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.982972  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.983272  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.984268  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.984292  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.984311  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.984345  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36135
	I1114 14:40:07.985271  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.987233  832572 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1114 14:40:07.985924  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.986482  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.986537  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.987670  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.987685  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.988346  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.988810  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.988828  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.988855  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.988880  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.988969  832572 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 14:40:07.988986  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1114 14:40:07.989005  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.989621  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.991206  832572 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1114 14:40:07.991235  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.992814  832572 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1114 14:40:07.992828  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1114 14:40:07.990091  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.992831  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.990120  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.992846  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.992853  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.990536  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.992877  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.990546  832572 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-317784"
	I1114 14:40:07.992901  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.992929  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.990927  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.989686  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.993240  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.993284  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.993317  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.993354  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.993375  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.993573  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.993624  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.993664  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.994242  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.994749  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.996527  832572 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1114 14:40:07.995445  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.995627  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.995756  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.997081  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.997226  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.997857  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.997880  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.997974  832572 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 14:40:07.997984  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1114 14:40:07.997999  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.998046  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.999876  832572 out.go:177]   - Using image docker.io/registry:2.8.3
	I1114 14:40:07.998861  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.999091  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.999611  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.001195  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.002802  832572 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1114 14:40:08.001522  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.001534  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.001768  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.001899  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.004468  832572 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1114 14:40:08.004553  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.004669  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.006221  832572 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1114 14:40:08.007741  832572 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1114 14:40:08.007759  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1114 14:40:08.007779  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.006331  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I1114 14:40:08.006399  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.006416  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1114 14:40:08.007932  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.006608  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.006634  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.006937  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I1114 14:40:08.008463  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.008776  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.008885  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.009461  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.009624  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.009653  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.010028  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.010048  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.010114  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.010373  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:08.010810  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.011016  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:08.012657  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.013189  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.013277  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.013310  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.013365  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.013530  832572 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 14:40:08.015115  832572 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1114 14:40:08.013543  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 14:40:08.013574  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.014050  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.014745  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.016582  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.016638  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.016668  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.016686  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 14:40:08.016695  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 14:40:08.016708  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.016836  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.016904  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.017119  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.017478  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.017646  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.019303  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44135
	I1114 14:40:08.019747  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.020221  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.020241  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.020807  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.020849  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.021180  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021383  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:08.021422  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:08.021504  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021649  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.021677  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021704  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.021716  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021915  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.021956  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.022101  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.022131  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.022239  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.022292  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.022378  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.022394  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.036758  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I1114 14:40:08.037201  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.037696  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.037717  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.038075  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.038286  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:08.040033  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.042121  832572 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1114 14:40:08.043619  832572 out.go:177]   - Using image docker.io/busybox:stable
	I1114 14:40:08.045193  832572 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 14:40:08.045211  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1114 14:40:08.045228  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.048641  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.049281  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.049407  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.049408  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.049691  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.049904  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.050112  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	W1114 14:40:08.051312  832572 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54622->192.168.39.16:22: read: connection reset by peer
	I1114 14:40:08.051343  832572 retry.go:31] will retry after 362.443206ms: ssh: handshake failed: read tcp 192.168.39.1:54622->192.168.39.16:22: read: connection reset by peer
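The handshake failure above is a transient "connection reset by peer" while several addon installers open SSH sessions to the node in parallel; sshutil backs off briefly and the later connections succeed. As a rough hand-run equivalent of the connection being retried (illustrative only, using the key path, user and address shown in the log):

        ssh -o StrictHostKeyChecking=no \
            -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa \
            -p 22 docker@192.168.39.16 true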
	I1114 14:40:08.207409  832572 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1114 14:40:08.207432  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1114 14:40:08.219970  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:40:08.236680  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1114 14:40:08.250193  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1114 14:40:08.250228  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1114 14:40:08.254790  832572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
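The command above splices a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to the host-side address. Reconstructed from the sed expression in that command (not read back from the cluster), the stanza added to the Corefile is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

It can be inspected afterwards with kubectl -n kube-system get configmap coredns -o yaml, assuming a kubeconfig pointed at the addons-317784 cluster.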
	I1114 14:40:08.255454  832572 node_ready.go:35] waiting up to 6m0s for node "addons-317784" to be "Ready" ...
	I1114 14:40:08.309093  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 14:40:08.323090  832572 node_ready.go:49] node "addons-317784" has status "Ready":"True"
	I1114 14:40:08.323130  832572 node_ready.go:38] duration metric: took 67.638062ms waiting for node "addons-317784" to be "Ready" ...
	I1114 14:40:08.323145  832572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:40:08.336832  832572 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1114 14:40:08.336864  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1114 14:40:08.337120  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 14:40:08.344269  832572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:08.356781  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 14:40:08.368608  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 14:40:08.381574  832572 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1114 14:40:08.381602  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1114 14:40:08.386287  832572 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1114 14:40:08.386315  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1114 14:40:08.400478  832572 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1114 14:40:08.400499  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1114 14:40:08.449578  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 14:40:08.449610  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1114 14:40:08.450787  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1114 14:40:08.450811  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1114 14:40:08.460580  832572 pod_ready.go:92] pod "etcd-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:08.460606  832572 pod_ready.go:81] duration metric: took 116.311095ms waiting for pod "etcd-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:08.460622  832572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:08.514149  832572 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1114 14:40:08.514194  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1114 14:40:08.578865  832572 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1114 14:40:08.578892  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1114 14:40:08.592836  832572 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1114 14:40:08.592865  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1114 14:40:08.611006  832572 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1114 14:40:08.611039  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1114 14:40:08.636195  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 14:40:08.636226  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 14:40:08.640801  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1114 14:40:08.640830  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1114 14:40:08.712210  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1114 14:40:08.770436  832572 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1114 14:40:08.770468  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1114 14:40:08.855705  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1114 14:40:08.870150  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1114 14:40:08.870197  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1114 14:40:08.884490  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1114 14:40:08.884522  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1114 14:40:08.888761  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 14:40:08.888787  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 14:40:08.900243  832572 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1114 14:40:08.900265  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1114 14:40:08.910542  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 14:40:08.972222  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1114 14:40:08.972258  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1114 14:40:09.010923  832572 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 14:40:09.010954  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1114 14:40:09.040630  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 14:40:09.057362  832572 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1114 14:40:09.057392  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1114 14:40:09.073246  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1114 14:40:09.073270  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1114 14:40:09.123550  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 14:40:09.139397  832572 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1114 14:40:09.139447  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1114 14:40:09.145153  832572 pod_ready.go:92] pod "kube-apiserver-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:09.145181  832572 pod_ready.go:81] duration metric: took 684.546143ms waiting for pod "kube-apiserver-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.145197  832572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.166443  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1114 14:40:09.166478  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1114 14:40:09.236378  832572 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 14:40:09.236410  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1114 14:40:09.270330  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1114 14:40:09.270362  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1114 14:40:09.304405  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 14:40:09.354255  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1114 14:40:09.354285  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1114 14:40:09.403173  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 14:40:09.403210  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1114 14:40:09.439331  832572 pod_ready.go:92] pod "kube-controller-manager-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:09.439369  832572 pod_ready.go:81] duration metric: took 294.162071ms waiting for pod "kube-controller-manager-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.439384  832572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.451154  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 14:40:09.611012  832572 pod_ready.go:92] pod "kube-scheduler-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:09.611038  832572 pod_ready.go:81] duration metric: took 171.64584ms waiting for pod "kube-scheduler-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.611047  832572 pod_ready.go:38] duration metric: took 1.287888103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:40:09.611064  832572 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:40:09.611141  832572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:40:15.450188  832572 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1114 14:40:15.450247  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:15.453819  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.454375  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:15.454411  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.454550  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:15.454778  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:15.455013  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:15.455201  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:15.682704  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.445983471s)
	I1114 14:40:15.682758  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.462751331s)
	I1114 14:40:15.682802  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.682827  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.682851  832572 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.428024683s)
	I1114 14:40:15.682886  832572 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 14:40:15.682764  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.682932  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.682934  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.373806413s)
	I1114 14:40:15.682991  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683008  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.683287  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.683303  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.683313  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683321  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.683746  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.683766  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.683765  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.683778  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683775  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.683782  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.683787  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.683796  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683803  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.686148  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.686171  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.686190  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.686194  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.686203  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.686220  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.686226  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.686234  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.686174  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.855516  832572 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1114 14:40:15.906684  832572 addons.go:231] Setting addon gcp-auth=true in "addons-317784"
	I1114 14:40:15.906766  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:15.907266  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:15.907316  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:15.923755  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I1114 14:40:15.924285  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:15.924799  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:15.924827  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:15.925263  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:15.925735  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:15.925768  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:15.940272  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35045
	I1114 14:40:15.940727  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:15.941248  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:15.941263  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:15.941634  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:15.941800  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:15.943387  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:15.943647  832572 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1114 14:40:15.943672  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:15.946785  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.947258  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:15.947280  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.947459  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:15.947695  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:15.947862  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:15.948046  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:17.113431  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.776257149s)
	I1114 14:40:17.113473  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.744838651s)
	I1114 14:40:17.113511  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113522  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113530  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113540  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113431  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.756602225s)
	I1114 14:40:17.113593  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113645  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113667  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.401419823s)
	I1114 14:40:17.113697  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113716  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113769  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.258016647s)
	I1114 14:40:17.113804  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.20323607s)
	I1114 14:40:17.114069  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114085  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114104  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.114088  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.114171  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.073512184s)
	I1114 14:40:17.114202  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114213  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.114296  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.990686239s)
	W1114 14:40:17.114341  832572 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1114 14:40:17.114366  832572 retry.go:31] will retry after 311.125362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
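The failure is an ordering problem inside a single kubectl apply: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied together, and the class is rejected because the freshly created CRDs are not yet established. The harness simply retries (and, at 14:40:17.426230 below, re-applies with --force), which is normally enough. A more explicit hand-run sketch, applying the CRD first and waiting for it before creating the class (illustrative commands, assuming a kubeconfig for this cluster):

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for condition=established --timeout=60s \
            crd/volumesnapshotclasses.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml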
	I1114 14:40:17.114447  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.810005785s)
	I1114 14:40:17.114466  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114476  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115048  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115070  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115080  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115091  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115102  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115107  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115115  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115124  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115126  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115129  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115138  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115147  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115158  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115167  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115176  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115185  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115187  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115195  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115206  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115218  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115234  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115243  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115252  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115264  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115272  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115281  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115288  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115301  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115310  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115319  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115327  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.116491  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.116508  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.116520  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.116529  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.116697  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.116725  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.116733  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.116990  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117022  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.117032  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.117091  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117124  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.117132  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.117141  832572 addons.go:467] Verifying addon metrics-server=true in "addons-317784"
	I1114 14:40:17.117489  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117514  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117542  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.117551  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115177  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.117791  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.117856  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.118013  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.118024  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.118082  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.118092  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.118100  832572 addons.go:467] Verifying addon registry=true in "addons-317784"
	I1114 14:40:17.120042  832572 out.go:177] * Verifying registry addon...
	I1114 14:40:17.120046  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.120062  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.121681  832572 addons.go:467] Verifying addon ingress=true in "addons-317784"
	I1114 14:40:17.118623  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.123235  832572 out.go:177] * Verifying ingress addon...
	I1114 14:40:17.121763  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.120176  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.118595  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.122456  832572 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1114 14:40:17.126205  832572 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1114 14:40:17.154370  832572 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1114 14:40:17.154399  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:17.154729  832572 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1114 14:40:17.154756  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
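Both waiters poll pods by label selector until they report Ready. For reference, the same pods can be listed by hand with the selectors and namespaces shown above (illustrative, assuming a kubeconfig for this cluster):

        kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
        kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx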
	I1114 14:40:17.171341  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.171364  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.171662  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.171679  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	W1114 14:40:17.171773  832572 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
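The 'default-storageclass' warning is an ordinary optimistic-concurrency conflict: the StorageClass local-path was modified between read and update while minikube tried to clear its default annotation, so the write was rejected with "the object has been modified". Re-running the update against the latest version of the object resolves it; one conventional hand-run equivalent (illustrative only) is:

        kubectl patch storageclass local-path -p \
            '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'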
	I1114 14:40:17.182753  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.182777  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.183058  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.183078  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.192454  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:17.192548  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:17.426230  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 14:40:17.716628  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.265400607s)
	I1114 14:40:17.716652  832572 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.105483581s)
	I1114 14:40:17.716686  832572 api_server.go:72] duration metric: took 9.795386004s to wait for apiserver process to appear ...
	I1114 14:40:17.716694  832572 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:40:17.716698  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.716714  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.716716  832572 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I1114 14:40:17.716802  832572 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.773131259s)
	I1114 14:40:17.718710  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 14:40:17.717139  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.717168  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.719959  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.721145  832572 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1114 14:40:17.719986  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.722422  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.722510  832572 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1114 14:40:17.722534  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1114 14:40:17.722692  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.722713  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.722733  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.722755  832572 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-317784"
	I1114 14:40:17.724210  832572 out.go:177] * Verifying csi-hostpath-driver addon...
	I1114 14:40:17.726496  832572 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1114 14:40:17.799332  832572 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I1114 14:40:17.801243  832572 api_server.go:141] control plane version: v1.28.3
	I1114 14:40:17.801278  832572 api_server.go:131] duration metric: took 84.576298ms to wait for apiserver health ...
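The healthz probe above is a plain HTTPS GET against the apiserver. A hand-run equivalent against the same endpoint (illustrative; -k skips certificate verification for simplicity):

        curl -k https://192.168.39.16:8443/healthz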
	I1114 14:40:17.801291  832572 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:40:17.869942  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:17.892833  832572 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1114 14:40:17.892866  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1114 14:40:17.928603  832572 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 14:40:17.928636  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1114 14:40:17.956617  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:17.975923  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 14:40:18.011537  832572 system_pods.go:59] 18 kube-system pods found
	I1114 14:40:18.011588  832572 system_pods.go:61] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.011598  832572 system_pods.go:61] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending
	I1114 14:40:18.011607  832572 system_pods.go:61] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending
	I1114 14:40:18.011613  832572 system_pods.go:61] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending
	I1114 14:40:18.011621  832572 system_pods.go:61] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.011628  832572 system_pods.go:61] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.011636  832572 system_pods.go:61] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.011647  832572 system_pods.go:61] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.011664  832572 system_pods.go:61] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.011679  832572 system_pods.go:61] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.011695  832572 system_pods.go:61] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.011709  832572 system_pods.go:61] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.011721  832572 system_pods.go:61] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.011734  832572 system_pods.go:61] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.011748  832572 system_pods.go:61] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.011763  832572 system_pods.go:61] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.011776  832572 system_pods.go:61] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.011789  832572 system_pods.go:61] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.011803  832572 system_pods.go:74] duration metric: took 210.503856ms to wait for pod list to return data ...
	I1114 14:40:18.011819  832572 default_sa.go:34] waiting for default service account to be created ...
	I1114 14:40:18.043289  832572 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 14:40:18.043314  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:18.058440  832572 default_sa.go:45] found service account: "default"
	I1114 14:40:18.058468  832572 default_sa.go:55] duration metric: took 46.637967ms for default service account to be created ...
	I1114 14:40:18.058478  832572 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 14:40:18.144281  832572 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 14:40:18.144320  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:18.181405  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:18.181437  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.181446  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:18.181457  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending
	I1114 14:40:18.181463  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:18.181470  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.181478  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.181482  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.181488  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.181498  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.181506  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.181542  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.181562  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.181575  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.181585  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.181597  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.181615  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.181631  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.181644  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.181667  832572 retry.go:31] will retry after 245.404881ms: missing components: kube-proxy
	I1114 14:40:18.208426  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:18.208568  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:18.452278  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:18.452315  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.452323  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:18.452331  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:18.452344  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:18.452352  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.452357  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.452361  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.452369  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.452374  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.452378  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.452385  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.452393  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.452402  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.452408  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.452416  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.452423  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.452431  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.452439  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.452456  832572 retry.go:31] will retry after 244.568454ms: missing components: kube-proxy
	I1114 14:40:18.720508  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:18.734144  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:18.764367  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:18.811250  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:18.811299  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.811314  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:18.811328  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:18.811340  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:18.811360  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.811369  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.811381  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.811397  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.811411  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.811425  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.811440  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.811455  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.811465  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.811479  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.811493  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.811510  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.811524  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.811537  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.811564  832572 retry.go:31] will retry after 461.869894ms: missing components: kube-proxy
	I1114 14:40:19.177523  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:19.239442  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:19.261472  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:19.286302  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:19.286338  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:19.286348  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:19.286356  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:19.286363  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:19.286368  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:19.286373  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:19.286378  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:19.286385  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:19.286392  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:19.286399  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:19.286405  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:19.286412  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:19.286420  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:19.286428  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:19.286435  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.286444  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.286449  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:19.286455  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:19.286471  832572 retry.go:31] will retry after 592.745152ms: missing components: kube-proxy
	I1114 14:40:19.650621  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:19.696916  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:19.700046  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:19.899301  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:19.899339  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:19.899348  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:19.899357  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:19.899363  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:19.899368  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:19.899380  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:19.899386  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:19.899394  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:19.899402  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:19.899414  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:19.899423  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:19.899451  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:19.899458  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:19.899467  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:19.899475  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.899485  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.899492  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:19.899502  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:19.899525  832572 retry.go:31] will retry after 743.897155ms: missing components: kube-proxy
	I1114 14:40:20.169340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:20.249633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:20.251330  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:20.479080  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.05279354s)
	I1114 14:40:20.479134  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.503177148s)
	I1114 14:40:20.479147  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.479164  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.479176  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.479193  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.479532  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.479550  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.479566  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.479574  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.479774  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:20.479802  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:20.479939  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.479950  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.480026  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.480041  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.480054  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.480063  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.480839  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.480857  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.480839  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:20.482919  832572 addons.go:467] Verifying addon gcp-auth=true in "addons-317784"
	I1114 14:40:20.484729  832572 out.go:177] * Verifying gcp-auth addon...
	I1114 14:40:20.487111  832572 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1114 14:40:20.491132  832572 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1114 14:40:20.491147  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:20.494794  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:20.657446  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:20.657481  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:20.657489  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:20.657498  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:20.657506  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:20.657512  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:20.657516  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:20.657521  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:20.657528  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:20.657534  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:20.657543  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:20.657554  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:20.657564  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:20.657572  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:20.657581  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:20.657588  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:20.657598  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:20.657607  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:20.657616  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:20.657631  832572 retry.go:31] will retry after 593.375754ms: missing components: kube-proxy
	I1114 14:40:20.661445  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:20.703175  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:20.705934  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:20.999265  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:21.153102  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:21.201920  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:21.203388  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:21.263889  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:21.263924  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:21.263934  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:21.263945  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:21.263954  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:21.263959  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:21.263964  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:21.263968  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:21.263974  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:21.263979  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:21.263984  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:21.263992  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:21.263999  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:21.264008  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:21.264014  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:21.264021  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:21.264028  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:21.264033  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:21.264043  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:21.264064  832572 retry.go:31] will retry after 1.176167498s: missing components: kube-proxy
	I1114 14:40:21.499055  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:21.657025  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:21.698437  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:21.702862  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:22.003216  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:22.151843  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:22.203675  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:22.203833  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:22.450065  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:22.450101  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:22.450110  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:22.450119  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:22.450129  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:22.450134  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:22.450139  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:22.450145  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:22.450151  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:22.450158  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:22.450163  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:22.450169  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:22.450178  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:22.450184  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:22.450193  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:22.450200  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:22.450210  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:22.450217  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:22.450226  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:22.450258  832572 retry.go:31] will retry after 1.018281819s: missing components: kube-proxy
	I1114 14:40:22.500492  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:22.650620  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:22.697588  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:22.699250  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:23.004254  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:23.150899  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:23.199121  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:23.199700  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:23.478567  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:23.478608  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:23.478620  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:23.478635  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:23.478642  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:23.478647  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:23.478652  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:23.478656  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:23.478668  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:23.478677  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:23.478682  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:23.478688  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:23.478694  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:23.478700  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:23.478707  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:23.478716  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:23.478725  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:23.478732  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:23.478737  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:23.478754  832572 retry.go:31] will retry after 1.491059492s: missing components: kube-proxy
	I1114 14:40:23.499123  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:23.650221  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:23.698660  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:23.698706  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:24.002974  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:24.153501  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:24.198679  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:24.199307  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:24.501020  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:24.652363  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:24.707731  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:24.717333  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:24.985390  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:24.985426  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:24.985434  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:24.985493  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:24.985499  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:24.985505  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:24.985509  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:24.985514  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:24.985523  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:24.985529  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Running
	I1114 14:40:24.985534  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:24.985540  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:24.985547  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:24.985557  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:24.985565  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:24.985577  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:24.985583  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:24.985589  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:24.985594  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:24.985600  832572 system_pods.go:126] duration metric: took 6.9271181s to wait for k8s-apps to be running ...
	I1114 14:40:24.985608  832572 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:40:24.985657  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:40:25.029221  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:25.050212  832572 system_svc.go:56] duration metric: took 64.588954ms WaitForService to wait for kubelet.
	I1114 14:40:25.050245  832572 kubeadm.go:581] duration metric: took 17.128945338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:40:25.050267  832572 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:40:25.057185  832572 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:40:25.057219  832572 node_conditions.go:123] node cpu capacity is 2
	I1114 14:40:25.057231  832572 node_conditions.go:105] duration metric: took 6.960226ms to run NodePressure ...
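	The kubelet service check and the NodePressure verification above amount to two plain checks that could be repeated by hand against the same node (commands not part of the recorded run; the profile/node name addons-317784 is taken from the log):
	    sudo systemctl is-active --quiet kubelet && echo "kubelet active"
	    kubectl --context addons-317784 describe node addons-317784
	systemctl is-active exits 0 only when the unit is active, which is what the ssh_runner step above appears to rely on, and describe node reports the same cpu and ephemeral-storage capacity figures logged by node_conditions.go.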
	I1114 14:40:25.057244  832572 start.go:228] waiting for startup goroutines ...
	I1114 14:40:25.159787  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:25.206036  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:25.207160  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:25.499307  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:25.652057  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:25.702844  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:25.707702  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:26.029985  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:26.161062  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:26.204531  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:26.205375  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:26.513078  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:26.656857  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:26.707604  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:26.707894  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:27.001636  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:27.155381  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:27.202593  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:27.202925  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:27.507633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:27.659898  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:27.698822  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:27.699542  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:27.998896  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:28.152507  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:28.208411  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:28.210696  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:28.502760  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:28.660164  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:28.702749  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:28.704650  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:29.004038  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:29.169789  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:29.209226  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:29.209794  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:29.498710  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:29.650606  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:29.699769  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:29.700035  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:30.002817  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:30.151663  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:30.198769  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:30.199476  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:30.499172  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:30.651907  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:30.700674  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:30.701099  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:31.001241  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:31.151326  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:31.200565  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:31.200923  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:31.500232  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:31.650373  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:31.696942  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:31.701820  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:31.999672  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:32.149624  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:32.196993  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:32.198243  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:32.498728  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:32.656845  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:32.699693  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:32.700664  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:33.003516  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:33.152629  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:33.203063  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:33.203734  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:33.507431  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:33.658547  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:33.701275  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:33.702140  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:33.998744  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:34.150724  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:34.199240  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:34.200932  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:34.502459  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:34.654916  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:34.699239  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:34.701021  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:35.000186  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:35.158158  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:35.198937  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:35.200233  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:35.499522  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:35.652097  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:35.699624  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:35.700043  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:35.998954  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:36.150237  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:36.199401  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:36.200214  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:36.499137  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:36.651645  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:36.699338  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:36.699619  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:36.999613  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:37.150852  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:37.198857  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:37.199900  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:37.499110  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:37.666060  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:37.699187  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:37.700835  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:37.999740  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:38.153046  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:38.199370  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:38.199502  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:38.500502  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:38.650703  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:38.698267  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:38.699273  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:38.998806  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:39.151367  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:39.198304  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:39.198878  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:39.499554  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:39.656261  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:39.700078  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:39.700418  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:40.000821  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:40.151182  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:40.198527  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:40.199293  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:40.498633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:40.650032  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:40.699302  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:40.703633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:40.999414  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:41.154140  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:41.201140  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:41.201411  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:41.502086  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:41.651069  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:41.699219  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:41.701050  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:41.999424  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:42.152676  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:42.197558  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:42.198221  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:42.498954  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:42.690073  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:42.715702  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:42.719942  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:43.012944  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:43.173249  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:43.209800  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:43.214157  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:43.499569  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:43.656672  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:43.698191  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:43.699740  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:44.010559  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:44.151152  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:44.202112  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:44.202500  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:44.499132  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:44.650774  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:44.697651  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:44.700849  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:45.005585  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:45.150511  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:45.196936  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:45.199331  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:45.499529  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:45.652181  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:45.698352  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:45.698459  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:45.999480  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:46.150788  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:46.198444  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:46.198454  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:46.504080  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:46.667659  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:46.699590  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:46.699987  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:47.000009  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:47.152012  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:47.198214  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:47.198250  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:47.498752  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:47.651262  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:47.699880  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:47.701628  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:48.000298  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:48.151167  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:48.198526  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:48.200112  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:48.499187  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:48.651179  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:48.698615  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:48.699928  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:49.004809  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:49.156038  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:49.199853  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:49.201445  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:49.500837  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:49.650623  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:49.701139  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:49.701279  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:50.018902  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:50.150121  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:50.198970  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:50.200210  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:50.501865  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:50.650285  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:50.702798  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:50.702966  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:50.999331  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:51.151582  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:51.198492  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:51.198726  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:51.498757  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:51.650441  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:51.701774  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:51.706071  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:52.000210  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:52.150758  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:52.198049  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:52.198143  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:52.499122  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:52.651193  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:52.699132  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:52.700469  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:52.999517  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:53.150478  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:53.203283  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:53.203377  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:53.500714  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:53.651920  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:53.698152  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:53.700380  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:54.000013  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:54.150770  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:54.199777  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:54.204106  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:54.499498  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:54.652091  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:54.698522  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:54.698849  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:54.999800  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:55.150766  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:55.198078  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:55.198340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:55.499172  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:55.660034  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:55.698014  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:55.699441  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:56.004499  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:56.150707  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:56.197538  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:56.199703  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:56.514398  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:56.660578  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:56.698136  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:56.698327  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:57.006366  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:57.157611  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:57.200472  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:57.202794  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:57.499109  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:57.652941  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:57.699302  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:57.699831  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:57.999138  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:58.150409  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:58.198251  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:58.198389  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:58.501099  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:58.652147  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:58.701560  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:58.702418  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:58.999226  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:59.160876  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:59.200008  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:59.201524  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:59.513057  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:59.652217  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:59.698806  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:59.699369  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:59.999463  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:00.151566  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:00.199918  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:00.200346  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:00.499119  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:00.653386  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:00.699799  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:00.704598  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:00.998567  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:01.151939  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:01.199498  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:01.200116  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:01.499914  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:01.649857  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:01.698019  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:01.699890  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:02.000001  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:02.150378  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:02.196939  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:02.198340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:02.501233  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:02.653500  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:02.698580  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:02.698671  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:02.999810  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:03.151004  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:03.199634  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:03.200790  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:03.499376  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:03.651625  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:03.707584  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:03.708179  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:03.999773  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:04.150919  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:04.197541  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:04.198128  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:04.500465  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:04.650799  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:04.699149  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:04.700539  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:05.001767  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:05.150608  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:05.199196  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:05.201050  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:05.499484  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:05.652415  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:05.698926  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:05.700647  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:06.296223  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:06.296378  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:06.298417  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:06.311896  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:06.499021  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:06.652972  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:06.703872  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:06.705099  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:06.999264  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:07.150340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:07.200580  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:07.200854  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:07.499589  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:07.650253  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:07.698589  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:07.699438  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:07.999856  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:08.158390  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:08.204650  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:08.221266  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:08.499636  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:08.651960  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:08.698636  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:08.701198  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:08.999789  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:09.153015  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:09.198524  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:09.200076  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:09.499922  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:09.650886  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:09.699358  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:09.699444  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:10.240041  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:10.243108  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:10.244222  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:10.249716  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:10.499525  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:10.650548  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:10.700317  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:10.700809  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:10.998803  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:11.151336  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:11.197521  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:11.198984  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:11.498851  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:11.651031  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:11.699121  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:11.699729  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:11.999551  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:12.164122  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:12.208326  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:12.208785  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:12.499034  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:12.653004  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:12.700380  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:12.701778  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:13.000210  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:13.151865  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:13.198767  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:13.200589  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:13.500217  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:13.663691  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:13.707752  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:13.708060  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:13.998873  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:14.157216  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:14.198193  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:14.198802  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:14.499241  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:14.651319  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:14.697235  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:14.698412  832572 kapi.go:107] duration metric: took 57.575954852s to wait for kubernetes.io/minikube-addons=registry ...
	I1114 14:41:15.004436  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:15.152143  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:15.197933  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:15.499228  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:15.651192  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:15.698605  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:15.999445  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:16.151227  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:16.197916  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:16.498842  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:16.650208  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:16.698166  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:16.999835  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:17.150987  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:17.202111  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:17.499494  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:17.651668  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:17.699225  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:18.156594  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:18.158649  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:18.196976  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:18.500080  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:18.653718  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:18.697389  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:19.002377  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:19.166150  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:19.201683  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:19.507269  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:19.658448  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:19.701567  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:20.000870  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:20.151129  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:20.199608  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:20.499228  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:20.651391  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:20.697659  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:20.998439  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:21.150837  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:21.197993  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:21.500288  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:21.660593  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:21.697975  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:21.999392  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:22.151746  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:22.197813  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:22.499026  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:22.654322  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:22.700725  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:22.999601  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:23.156990  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:23.197308  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:23.503478  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:23.650455  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:23.697466  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:23.999428  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:24.149779  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:24.199873  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:24.499325  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:24.651695  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:24.697973  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:24.999188  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:25.150393  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:25.197798  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:25.499029  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:25.651226  832572 kapi.go:107] duration metric: took 1m7.924725548s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1114 14:41:25.700308  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:25.999785  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:26.197715  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:26.499502  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:27.015486  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:27.016124  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:27.198437  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:27.501861  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:27.697106  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:27.999060  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:28.199587  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:28.499099  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:28.698127  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:28.999323  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:29.198354  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:29.499908  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:29.699608  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:29.999440  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:30.197280  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:30.639276  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:30.697832  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:31.001524  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:31.198514  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:31.499557  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:31.699707  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:32.000306  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:32.198524  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:32.504642  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:32.698797  832572 kapi.go:107] duration metric: took 1m15.572587961s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1114 14:41:32.999706  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:33.500527  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:33.999631  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:34.500590  832572 kapi.go:107] duration metric: took 1m14.013473792s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1114 14:41:34.502201  832572 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-317784 cluster.
	I1114 14:41:34.503736  832572 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1114 14:41:34.505337  832572 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1114 14:41:34.506837  832572 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1114 14:41:34.508136  832572 addons.go:502] enable addons completed in 1m26.643238104s: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1114 14:41:34.508187  832572 start.go:233] waiting for cluster config update ...
	I1114 14:41:34.508206  832572 start.go:242] writing updated cluster config ...
	I1114 14:41:34.508516  832572 ssh_runner.go:195] Run: rm -f paused
	I1114 14:41:34.562255  832572 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 14:41:34.563794  832572 out.go:177] * Done! kubectl is now configured to use "addons-317784" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 14:39:21 UTC, ends at Tue 2023-11-14 14:41:57 UTC. --
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.066714382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699972917066696916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:493912,},InodesUsed:&UInt64Value{Value:210,},},},}" file="go-grpc-middleware/chain.go:25" id=acc2ecd6-7b15-459f-8feb-a7dcad34d810 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.067276005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9c44dff3-db92-4008-b77e-1552e50ad23f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.067334798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c44dff3-db92-4008-b77e-1552e50ad23f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.068167843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:035b479e8e369e67a4b85455f52058d68db026c20973c9df83cf61a2aab96a21,PodSandboxId:fe7b8b729b0d48b63126569b48fc84563e79abdda45cf53974f71c120aaa75a3,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,State:CONTAINER_EXITED,CreatedAt:1699972914682281231,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-a752c059-4770-47b4-8afa-af875685de10,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 132e822a-d359-477c-a611-a01f2a006604,},Annotations:map[string]string{io.kubernetes.container.hash: b5a652c3,io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fba60347af66f773f71091522e4ccd9b712ad17ea15366ef193487f75e31fb85,PodSandboxId:3e18cfc2a73d34e620a7825fa7010d33f02cf29a2d5d54c71f635eb6a271ab8e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1699972913989616111,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55c3e919-8947-4277-aeb1-45e8f263c870,},Annotations:map[string]string{io.kubernetes.container.hash: c82b1fe8,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},Annotations:map[string]string{io.kubernetes.container.hash: f93dc9b6
,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kub
ernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e901921b17ee9edee81f8856467c65d4c2a156b50b834133aed70f5f4b553ff0,PodSandboxId:2a5daeba2c2273d0cd7189040c3265d68e0f0b01fcba7a6d76ecf8855db58104,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,State:CONTAINER_RUNNING,CreatedAt:1699972891484038182,Labels:map
[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c6974c4d8-tzwkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3f4fe5be-a92a-4711-9615-4091dbade91d,},Annotations:map[string]string{io.kubernetes.container.hash: ae52c92f,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68fe1d59991055a8df480aa58562cbbcd45a3c5fa21e4f0f4230cccad516ec5e,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&Cont
ainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1699972883851221536,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: f5323380,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1955ef47b5a9aaf706a32ecf9f9a5a26ed07244486e8c98a5704d0d1064555,PodSandboxId:572af360256790c181233151ca68
fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1699972881802440512,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: e0c8bcb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9eda30492e5fa9258718e37c889a5888100889cc239d2f75bb07b6
96854db7a,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1699972879633619855,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: a587c952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a601e09b
1d581217a534ad0b3018dbea455230fdedf899299ad4644ebae16b,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1699972878466308762,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 244ef8cd,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972878356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554ca1db390278ba8653219550a19585378c02537c8ca104c43f9a5897d17080,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1699972876329922098,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b65f0,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40a6b8d04682e75791643ed51be7537e9212201fef7d103dcc24e72f6278b54,PodSandboxId:d66e3fea3c10523e37788f4a3652bb1f315dc7d1394fa2da0019afce526e6879,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1699972875096866600,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-7snd4,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8326ade1-223f-41e0-97ec-47baa8cf5141,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f1d4b531,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kuberne
tes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4277a3ed9ccf549d22c8ee025bb9c5eadd8bb8c47ae5397c4fc2819e2caaf694,PodSandboxId:e78ed7cd1fd682dc45d93c9d666101c90a714653b931c068766dbf880c99e89b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972873429341547,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.n
ame: snapshot-controller-58dbcc7b99-7t6pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb464b0-5361-435a-888d-ae86a377888d,},Annotations:map[string]string{io.kubernetes.container.hash: bd7f5b34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernet
es.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d3
00281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97616bc7850e432cc6f0894c11e7083fadbcd653a0d50fdae0cd50f92c0119f1,PodSandboxId:0f3af2697bd63d752bee32dd79ec3f601e42f2ddb744e211ff36637cb5f07edd,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,Annotations:
map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,State:CONTAINER_RUNNING,CreatedAt:1699972866681639590,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5649c69bf6-9phzq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e170d39e-44a4-47f3-8d7a-c33c0ab80af7,},Annotations:map[string]string{io.kubernetes.container.hash: f4706f7a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39a2a899a9e4f0983b1cfbf7c20c25550450f8524a937e6670c0890183bac29,PodSandboxId:ae68b4615f1188fd282292e008c129f4709f2d4e6b64f557ccd4
36d2bf680e8f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972860142884608,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-zdcmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ad365-92c4-44cf-86e7-a36669bf2673,},Annotations:map[string]string{io.kubernetes.container.hash: b79aaca2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1c53ddd12dad9f5798577dacab6815144c6
d4735e7f7122ccdac3c25276ddc,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1699972857802792333,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 4a5ea6a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f989de2637865a8a1a67e274eb3ebec6baaa4ac0f648ba9b9e95eec8b0594a7,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINER_RUNNING,CreatedAt:1699972854510276860,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dde2185933daf6048c68d2486eb0168f5ec58201e38dc7d851e4c50d06601e2,PodSandboxId:d67c80f97f98a68188bf10f89d01855242bcc766fb974f76e1938ed59539cf15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1699972846595101729,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed567ba-0020-4621-bada-2a846f0f47a3,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6b5d29c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3c6d74248251c00ee108bfabc05a066f9efcc5d704dc0026d62c6908c5fc8,PodSandboxId:c8b258085197fea6cd333aba77aa14edc9a5c35b48f5b9ec693ff172ac5ff4d2,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1699972844850964604,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1487b-0aca-47f1-94c6-c98baaf75535,},Annotations:map
[string]string{io.kubernetes.container.hash: 812afc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4138b20e23b838467e8f60bb5d78ac109293c242c7c50436c48464e62f0ce7b5,PodSandboxId:d348268b754103eeb74d95d66288425c7f99e4396cd419cacf8c6623ebd53dd2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,State:CONTAINER_RUNNING,CreatedAt:1699972842157081755,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhtw6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: afeb4122-4e14-4945-b56a-2c9b08c47a5f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3054ce92,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3,PodSandboxId:a924fafac788f15bb085191cd6e8180c96b3a1eca23ea76905d4fbf43f224220,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1699972832712217297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,
io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b,},Annotations:map[string]string{io.kubernetes.container.hash: 1284627f,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedA
t:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa69a346cb72279384c9e23ced7bfbfba1d1c3fdd1a36049f8d4cf280b38c293,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINE
R_EXITED,CreatedAt:1699972821592950866,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha
256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb6060368994409f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956bbc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1
0baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c3027897ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c44dff3-db92-4008-b77e-1552e50ad23f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.104874913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6dd468fb-0b44-4e3c-95a3-8415cca55ead name=/runtime.v1.RuntimeService/Version
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.104930989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6dd468fb-0b44-4e3c-95a3-8415cca55ead name=/runtime.v1.RuntimeService/Version
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.106266946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab645f78-607f-4c0a-a735-1b0513e04396 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.107380332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699972917107363347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:493912,},InodesUsed:&UInt64Value{Value:210,},},},}" file="go-grpc-middleware/chain.go:25" id=ab645f78-607f-4c0a-a735-1b0513e04396 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.108363154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=563110bc-ff1c-4406-84e6-20475fe2f058 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.108417095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=563110bc-ff1c-4406-84e6-20475fe2f058 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.109243593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:035b479e8e369e67a4b85455f52058d68db026c20973c9df83cf61a2aab96a21,PodSandboxId:fe7b8b729b0d48b63126569b48fc84563e79abdda45cf53974f71c120aaa75a3,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,State:CONTAINER_EXITED,CreatedAt:1699972914682281231,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-a752c059-4770-47b4-8afa-af875685de10,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 132e822a-d359-477c-a611-a01f2a006604,},Annotations:map[string]string{io.kubernetes.container.hash: b5a652c3,io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fba60347af66f773f71091522e4ccd9b712ad17ea15366ef193487f75e31fb85,PodSandboxId:3e18cfc2a73d34e620a7825fa7010d33f02cf29a2d5d54c71f635eb6a271ab8e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1699972913989616111,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55c3e919-8947-4277-aeb1-45e8f263c870,},Annotations:map[string]string{io.kubernetes.container.hash: c82b1fe8,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},Annotations:map[string]string{io.kubernetes.container.hash: f93dc9b6
,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kub
ernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e901921b17ee9edee81f8856467c65d4c2a156b50b834133aed70f5f4b553ff0,PodSandboxId:2a5daeba2c2273d0cd7189040c3265d68e0f0b01fcba7a6d76ecf8855db58104,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,State:CONTAINER_RUNNING,CreatedAt:1699972891484038182,Labels:map
[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c6974c4d8-tzwkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3f4fe5be-a92a-4711-9615-4091dbade91d,},Annotations:map[string]string{io.kubernetes.container.hash: ae52c92f,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68fe1d59991055a8df480aa58562cbbcd45a3c5fa21e4f0f4230cccad516ec5e,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&Cont
ainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1699972883851221536,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: f5323380,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1955ef47b5a9aaf706a32ecf9f9a5a26ed07244486e8c98a5704d0d1064555,PodSandboxId:572af360256790c181233151ca68
fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1699972881802440512,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: e0c8bcb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9eda30492e5fa9258718e37c889a5888100889cc239d2f75bb07b6
96854db7a,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1699972879633619855,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: a587c952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a601e09b
1d581217a534ad0b3018dbea455230fdedf899299ad4644ebae16b,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1699972878466308762,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 244ef8cd,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972878356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554ca1db390278ba8653219550a19585378c02537c8ca104c43f9a5897d17080,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1699972876329922098,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b65f0,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40a6b8d04682e75791643ed51be7537e9212201fef7d103dcc24e72f6278b54,PodSandboxId:d66e3fea3c10523e37788f4a3652bb1f315dc7d1394fa2da0019afce526e6879,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1699972875096866600,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-7snd4,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8326ade1-223f-41e0-97ec-47baa8cf5141,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f1d4b531,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kuberne
tes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4277a3ed9ccf549d22c8ee025bb9c5eadd8bb8c47ae5397c4fc2819e2caaf694,PodSandboxId:e78ed7cd1fd682dc45d93c9d666101c90a714653b931c068766dbf880c99e89b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972873429341547,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.n
ame: snapshot-controller-58dbcc7b99-7t6pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb464b0-5361-435a-888d-ae86a377888d,},Annotations:map[string]string{io.kubernetes.container.hash: bd7f5b34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernet
es.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d3
00281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97616bc7850e432cc6f0894c11e7083fadbcd653a0d50fdae0cd50f92c0119f1,PodSandboxId:0f3af2697bd63d752bee32dd79ec3f601e42f2ddb744e211ff36637cb5f07edd,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,Annotations:
map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,State:CONTAINER_RUNNING,CreatedAt:1699972866681639590,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5649c69bf6-9phzq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e170d39e-44a4-47f3-8d7a-c33c0ab80af7,},Annotations:map[string]string{io.kubernetes.container.hash: f4706f7a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39a2a899a9e4f0983b1cfbf7c20c25550450f8524a937e6670c0890183bac29,PodSandboxId:ae68b4615f1188fd282292e008c129f4709f2d4e6b64f557ccd4
36d2bf680e8f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972860142884608,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-zdcmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ad365-92c4-44cf-86e7-a36669bf2673,},Annotations:map[string]string{io.kubernetes.container.hash: b79aaca2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1c53ddd12dad9f5798577dacab6815144c6
d4735e7f7122ccdac3c25276ddc,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1699972857802792333,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 4a5ea6a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f989de2637865a8a1a67e274eb3ebec6baaa4ac0f648ba9b9e95eec8b0594a7,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINER_RUNNING,CreatedAt:1699972854510276860,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dde2185933daf6048c68d2486eb0168f5ec58201e38dc7d851e4c50d06601e2,PodSandboxId:d67c80f97f98a68188bf10f89d01855242bcc766fb974f76e1938ed59539cf15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1699972846595101729,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed567ba-0020-4621-bada-2a846f0f47a3,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6b5d29c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3c6d74248251c00ee108bfabc05a066f9efcc5d704dc0026d62c6908c5fc8,PodSandboxId:c8b258085197fea6cd333aba77aa14edc9a5c35b48f5b9ec693ff172ac5ff4d2,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1699972844850964604,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1487b-0aca-47f1-94c6-c98baaf75535,},Annotations:map
[string]string{io.kubernetes.container.hash: 812afc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4138b20e23b838467e8f60bb5d78ac109293c242c7c50436c48464e62f0ce7b5,PodSandboxId:d348268b754103eeb74d95d66288425c7f99e4396cd419cacf8c6623ebd53dd2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,State:CONTAINER_RUNNING,CreatedAt:1699972842157081755,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhtw6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: afeb4122-4e14-4945-b56a-2c9b08c47a5f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3054ce92,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3,PodSandboxId:a924fafac788f15bb085191cd6e8180c96b3a1eca23ea76905d4fbf43f224220,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1699972832712217297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,
io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b,},Annotations:map[string]string{io.kubernetes.container.hash: 1284627f,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedA
t:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa69a346cb72279384c9e23ced7bfbfba1d1c3fdd1a36049f8d4cf280b38c293,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINE
R_EXITED,CreatedAt:1699972821592950866,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha
256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb6060368994409f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956bbc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1
0baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c3027897ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=563110bc-ff1c-4406-84e6-20475fe2f058 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.152425006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5c540da4-66a9-4dd9-afc8-81bcc7370ce9 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.152481307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5c540da4-66a9-4dd9-afc8-81bcc7370ce9 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.153779266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2e715fcb-5e8f-4018-8656-bdd3070ce123 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.155014964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699972917154998866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:493912,},InodesUsed:&UInt64Value{Value:210,},},},}" file="go-grpc-middleware/chain.go:25" id=2e715fcb-5e8f-4018-8656-bdd3070ce123 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.155707094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ff31ca4-a527-4942-ab1d-5e68801df6a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.155819703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ff31ca4-a527-4942-ab1d-5e68801df6a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.156820865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:035b479e8e369e67a4b85455f52058d68db026c20973c9df83cf61a2aab96a21,PodSandboxId:fe7b8b729b0d48b63126569b48fc84563e79abdda45cf53974f71c120aaa75a3,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,State:CONTAINER_EXITED,CreatedAt:1699972914682281231,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-a752c059-4770-47b4-8afa-af875685de10,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 132e822a-d359-477c-a611-a01f2a006604,},Annotations:map[string]string{io.kubernetes.container.hash: b5a652c3,io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fba60347af66f773f71091522e4ccd9b712ad17ea15366ef193487f75e31fb85,PodSandboxId:3e18cfc2a73d34e620a7825fa7010d33f02cf29a2d5d54c71f635eb6a271ab8e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1699972913989616111,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55c3e919-8947-4277-aeb1-45e8f263c870,},Annotations:map[string]string{io.kubernetes.container.hash: c82b1fe8,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},Annotations:map[string]string{io.kubernetes.container.hash: f93dc9b6
,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kub
ernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e901921b17ee9edee81f8856467c65d4c2a156b50b834133aed70f5f4b553ff0,PodSandboxId:2a5daeba2c2273d0cd7189040c3265d68e0f0b01fcba7a6d76ecf8855db58104,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,State:CONTAINER_RUNNING,CreatedAt:1699972891484038182,Labels:map
[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c6974c4d8-tzwkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3f4fe5be-a92a-4711-9615-4091dbade91d,},Annotations:map[string]string{io.kubernetes.container.hash: ae52c92f,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68fe1d59991055a8df480aa58562cbbcd45a3c5fa21e4f0f4230cccad516ec5e,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&Cont
ainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1699972883851221536,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: f5323380,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1955ef47b5a9aaf706a32ecf9f9a5a26ed07244486e8c98a5704d0d1064555,PodSandboxId:572af360256790c181233151ca68
fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1699972881802440512,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: e0c8bcb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9eda30492e5fa9258718e37c889a5888100889cc239d2f75bb07b6
96854db7a,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1699972879633619855,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: a587c952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a601e09b
1d581217a534ad0b3018dbea455230fdedf899299ad4644ebae16b,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1699972878466308762,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 244ef8cd,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972878356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554ca1db390278ba8653219550a19585378c02537c8ca104c43f9a5897d17080,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1699972876329922098,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b65f0,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40a6b8d04682e75791643ed51be7537e9212201fef7d103dcc24e72f6278b54,PodSandboxId:d66e3fea3c10523e37788f4a3652bb1f315dc7d1394fa2da0019afce526e6879,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1699972875096866600,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-7snd4,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8326ade1-223f-41e0-97ec-47baa8cf5141,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f1d4b531,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kuberne
tes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4277a3ed9ccf549d22c8ee025bb9c5eadd8bb8c47ae5397c4fc2819e2caaf694,PodSandboxId:e78ed7cd1fd682dc45d93c9d666101c90a714653b931c068766dbf880c99e89b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972873429341547,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.n
ame: snapshot-controller-58dbcc7b99-7t6pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb464b0-5361-435a-888d-ae86a377888d,},Annotations:map[string]string{io.kubernetes.container.hash: bd7f5b34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernet
es.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d3
00281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97616bc7850e432cc6f0894c11e7083fadbcd653a0d50fdae0cd50f92c0119f1,PodSandboxId:0f3af2697bd63d752bee32dd79ec3f601e42f2ddb744e211ff36637cb5f07edd,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,Annotations:
map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,State:CONTAINER_RUNNING,CreatedAt:1699972866681639590,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5649c69bf6-9phzq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e170d39e-44a4-47f3-8d7a-c33c0ab80af7,},Annotations:map[string]string{io.kubernetes.container.hash: f4706f7a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39a2a899a9e4f0983b1cfbf7c20c25550450f8524a937e6670c0890183bac29,PodSandboxId:ae68b4615f1188fd282292e008c129f4709f2d4e6b64f557ccd4
36d2bf680e8f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972860142884608,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-zdcmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ad365-92c4-44cf-86e7-a36669bf2673,},Annotations:map[string]string{io.kubernetes.container.hash: b79aaca2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1c53ddd12dad9f5798577dacab6815144c6
d4735e7f7122ccdac3c25276ddc,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1699972857802792333,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 4a5ea6a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f989de2637865a8a1a67e274eb3ebec6baaa4ac0f648ba9b9e95eec8b0594a7,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINER_RUNNING,CreatedAt:1699972854510276860,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dde2185933daf6048c68d2486eb0168f5ec58201e38dc7d851e4c50d06601e2,PodSandboxId:d67c80f97f98a68188bf10f89d01855242bcc766fb974f76e1938ed59539cf15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1699972846595101729,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed567ba-0020-4621-bada-2a846f0f47a3,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6b5d29c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3c6d74248251c00ee108bfabc05a066f9efcc5d704dc0026d62c6908c5fc8,PodSandboxId:c8b258085197fea6cd333aba77aa14edc9a5c35b48f5b9ec693ff172ac5ff4d2,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1699972844850964604,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1487b-0aca-47f1-94c6-c98baaf75535,},Annotations:map
[string]string{io.kubernetes.container.hash: 812afc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4138b20e23b838467e8f60bb5d78ac109293c242c7c50436c48464e62f0ce7b5,PodSandboxId:d348268b754103eeb74d95d66288425c7f99e4396cd419cacf8c6623ebd53dd2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,State:CONTAINER_RUNNING,CreatedAt:1699972842157081755,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhtw6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: afeb4122-4e14-4945-b56a-2c9b08c47a5f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3054ce92,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3,PodSandboxId:a924fafac788f15bb085191cd6e8180c96b3a1eca23ea76905d4fbf43f224220,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1699972832712217297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,
io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b,},Annotations:map[string]string{io.kubernetes.container.hash: 1284627f,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedA
t:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa69a346cb72279384c9e23ced7bfbfba1d1c3fdd1a36049f8d4cf280b38c293,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINE
R_EXITED,CreatedAt:1699972821592950866,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha
256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb6060368994409f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956bbc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1
0baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c3027897ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ff31ca4-a527-4942-ab1d-5e68801df6a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.194932874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7f7851dd-2d4d-4893-ad87-71fd5dc29ec6 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.195008720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7f7851dd-2d4d-4893-ad87-71fd5dc29ec6 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.196718384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=26516c7a-0455-4b4e-9739-82acf7608883 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.198221276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699972917198202567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:493912,},InodesUsed:&UInt64Value{Value:210,},},},}" file="go-grpc-middleware/chain.go:25" id=26516c7a-0455-4b4e-9739-82acf7608883 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.198877308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e1431bc8-21f1-432b-a379-56c226870bca name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.198934223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e1431bc8-21f1-432b-a379-56c226870bca name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:41:57 addons-317784 crio[714]: time="2023-11-14 14:41:57.199795636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:035b479e8e369e67a4b85455f52058d68db026c20973c9df83cf61a2aab96a21,PodSandboxId:fe7b8b729b0d48b63126569b48fc84563e79abdda45cf53974f71c120aaa75a3,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,State:CONTAINER_EXITED,CreatedAt:1699972914682281231,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-a752c059-4770-47b4-8afa-af875685de10,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 132e822a-d359-477c-a611-a01f2a006604,},Annotations:map[string]string{io.kubernetes.container.hash: b5a652c3,io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fba60347af66f773f71091522e4ccd9b712ad17ea15366ef193487f75e31fb85,PodSandboxId:3e18cfc2a73d34e620a7825fa7010d33f02cf29a2d5d54c71f635eb6a271ab8e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1699972913989616111,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55c3e919-8947-4277-aeb1-45e8f263c870,},Annotations:map[string]string{io.kubernetes.container.hash: c82b1fe8,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},Annotations:map[string]string{io.kubernetes.container.hash: f93dc9b6
,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kub
ernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e901921b17ee9edee81f8856467c65d4c2a156b50b834133aed70f5f4b553ff0,PodSandboxId:2a5daeba2c2273d0cd7189040c3265d68e0f0b01fcba7a6d76ecf8855db58104,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5,State:CONTAINER_RUNNING,CreatedAt:1699972891484038182,Labels:map
[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7c6974c4d8-tzwkh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3f4fe5be-a92a-4711-9615-4091dbade91d,},Annotations:map[string]string{io.kubernetes.container.hash: ae52c92f,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68fe1d59991055a8df480aa58562cbbcd45a3c5fa21e4f0f4230cccad516ec5e,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&Cont
ainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1699972883851221536,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: f5323380,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1955ef47b5a9aaf706a32ecf9f9a5a26ed07244486e8c98a5704d0d1064555,PodSandboxId:572af360256790c181233151ca68
fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1699972881802440512,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: e0c8bcb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9eda30492e5fa9258718e37c889a5888100889cc239d2f75bb07b6
96854db7a,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1699972879633619855,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: a587c952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a601e09b
1d581217a534ad0b3018dbea455230fdedf899299ad4644ebae16b,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1699972878466308762,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 244ef8cd,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972878356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554ca1db390278ba8653219550a19585378c02537c8ca104c43f9a5897d17080,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1699972876329922098,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b65f0,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40a6b8d04682e75791643ed51be7537e9212201fef7d103dcc24e72f6278b54,PodSandboxId:d66e3fea3c10523e37788f4a3652bb1f315dc7d1394fa2da0019afce526e6879,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1699972875096866600,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-7snd4,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 8326ade1-223f-41e0-97ec-47baa8cf5141,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f1d4b531,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kuberne
tes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4277a3ed9ccf549d22c8ee025bb9c5eadd8bb8c47ae5397c4fc2819e2caaf694,PodSandboxId:e78ed7cd1fd682dc45d93c9d666101c90a714653b931c068766dbf880c99e89b,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972873429341547,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.n
ame: snapshot-controller-58dbcc7b99-7t6pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb464b0-5361-435a-888d-ae86a377888d,},Annotations:map[string]string{io.kubernetes.container.hash: bd7f5b34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernet
es.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d3
00281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97616bc7850e432cc6f0894c11e7083fadbcd653a0d50fdae0cd50f92c0119f1,PodSandboxId:0f3af2697bd63d752bee32dd79ec3f601e42f2ddb744e211ff36637cb5f07edd,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,Annotations:
map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f,State:CONTAINER_RUNNING,CreatedAt:1699972866681639590,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5649c69bf6-9phzq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e170d39e-44a4-47f3-8d7a-c33c0ab80af7,},Annotations:map[string]string{io.kubernetes.container.hash: f4706f7a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39a2a899a9e4f0983b1cfbf7c20c25550450f8524a937e6670c0890183bac29,PodSandboxId:ae68b4615f1188fd282292e008c129f4709f2d4e6b64f557ccd4
36d2bf680e8f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1699972860142884608,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-zdcmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ad365-92c4-44cf-86e7-a36669bf2673,},Annotations:map[string]string{io.kubernetes.container.hash: b79aaca2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1c53ddd12dad9f5798577dacab6815144c6
d4735e7f7122ccdac3c25276ddc,PodSandboxId:572af360256790c181233151ca68fc74882513bf332ebca7d9a14e209db38c4e,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1699972857802792333,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z6dqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42e7b085-9279-42c4-90f9-6feff2ec6f1e,},Annotations:map[string]string{io.kubernetes.container.hash: 4a5ea6a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f989de2637865a8a1a67e274eb3ebec6baaa4ac0f648ba9b9e95eec8b0594a7,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINER_RUNNING,CreatedAt:1699972854510276860,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dde2185933daf6048c68d2486eb0168f5ec58201e38dc7d851e4c50d06601e2,PodSandboxId:d67c80f97f98a68188bf10f89d01855242bcc766fb974f76e1938ed59539cf15,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1699972846595101729,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed567ba-0020-4621-bada-2a846f0f47a3,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6b5d29c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b3c6d74248251c00ee108bfabc05a066f9efcc5d704dc0026d62c6908c5fc8,PodSandboxId:c8b258085197fea6cd333aba77aa14edc9a5c35b48f5b9ec693ff172ac5ff4d2,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1699972844850964604,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1487b-0aca-47f1-94c6-c98baaf75535,},Annotations:map
[string]string{io.kubernetes.container.hash: 812afc87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4138b20e23b838467e8f60bb5d78ac109293c242c7c50436c48464e62f0ce7b5,PodSandboxId:d348268b754103eeb74d95d66288425c7f99e4396cd419cacf8c6623ebd53dd2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001,State:CONTAINER_RUNNING,CreatedAt:1699972842157081755,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-jhtw6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: afeb4122-4e14-4945-b56a-2c9b08c47a5f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 3054ce92,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3,PodSandboxId:a924fafac788f15bb085191cd6e8180c96b3a1eca23ea76905d4fbf43f224220,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1699972832712217297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,
io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b,},Annotations:map[string]string{io.kubernetes.container.hash: 1284627f,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedA
t:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa69a346cb72279384c9e23ced7bfbfba1d1c3fdd1a36049f8d4cf280b38c293,PodSandboxId:d618ca4c98652eb072f68a83bab6c5d6b7fc1de18f215f325c532a7f78724e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINE
R_EXITED,CreatedAt:1699972821592950866,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-jkrcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb043b53-5f93-4088-8ba6-93d4d706390a,},Annotations:map[string]string{io.kubernetes.container.hash: f29ccb9d,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha
256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb6060368994409f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956bbc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:1
0baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c3027897ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Image
Spec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e1431bc8-21f1-432b-a379-56c226870bca name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	035b479e8e369       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            2 seconds ago        Exited              helper-pod                               0                   fe7b8b729b0d4       helper-pod-create-pvc-a752c059-4770-47b4-8afa-af875685de10
	fba60347af66f       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          3 seconds ago        Exited              registry-test                            0                   3e18cfc2a73d3       registry-test
	f9a240297c919       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                                        4 seconds ago        Running             headlamp                                 0                   5c25d3f760ba9       headlamp-777fd4b855-lx8bp
	ab2806223cfd3       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                                              9 seconds ago        Running             nginx                                    0                   fbc912217deb2       nginx
	b6a01da282f10       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 23 seconds ago       Running             gcp-auth                                 0                   0b4739dda66af       gcp-auth-d4c87556c-fr8lj
	e901921b17ee9       registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5                             25 seconds ago       Running             controller                               0                   2a5daeba2c227       ingress-nginx-controller-7c6974c4d8-tzwkh
	68fe1d5999105       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          33 seconds ago       Running             csi-snapshotter                          0                   572af36025679       csi-hostpathplugin-z6dqk
	6d1955ef47b5a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          35 seconds ago       Running             csi-provisioner                          0                   572af36025679       csi-hostpathplugin-z6dqk
	c9eda30492e5f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            37 seconds ago       Running             liveness-probe                           0                   572af36025679       csi-hostpathplugin-z6dqk
	e9a601e09b1d5       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           38 seconds ago       Running             hostpath                                 0                   572af36025679       csi-hostpathplugin-z6dqk
	87d63d2948c7d       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             38 seconds ago       Exited              patch                                    3                   1463081c9b3ae       ingress-nginx-admission-patch-cxw9h
	554ca1db39027       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                40 seconds ago       Running             node-driver-registrar                    0                   572af36025679       csi-hostpathplugin-z6dqk
	c40a6b8d04682       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             42 seconds ago       Running             local-path-provisioner                   0                   d66e3fea3c105       local-path-provisioner-78b46b4d5c-7snd4
	f1427ec2aeb00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   43 seconds ago       Exited              create                                   0                   edd550a1a7f4d       ingress-nginx-admission-create-mp8tp
	4277a3ed9ccf5       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      43 seconds ago       Running             volume-snapshot-controller               0                   e78ed7cd1fd68       snapshot-controller-58dbcc7b99-7t6pq
	18faf6d4d568a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5                              44 seconds ago       Running             registry-proxy                           0                   3b11d4daeaa4f       registry-proxy-kh6p9
	394f9ede26bc7       docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c                                           49 seconds ago       Running             registry                                 0                   8066931a75328       registry-frqvq
	97616bc7850e4       gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f                               50 seconds ago       Running             cloud-spanner-emulator                   0                   0f3af2697bd63       cloud-spanner-emulator-5649c69bf6-9phzq
	d39a2a899a9e4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      57 seconds ago       Running             volume-snapshot-controller               0                   ae68b4615f118       snapshot-controller-58dbcc7b99-zdcmh
	7c1c53ddd12da       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   59 seconds ago       Running             csi-external-health-monitor-controller   0                   572af36025679       csi-hostpathplugin-z6dqk
	226be02a2e442       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      1                   e1e9062a537fc       storage-provisioner
	2f989de263786       a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57                                                                             About a minute ago   Running             metrics-server                           1                   d618ca4c98652       metrics-server-7c66d45ddc-jkrcj
	1dde2185933da       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   d67c80f97f98a       csi-hostpath-attacher-0
	15b3c6d742482       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   c8b258085197f       csi-hostpath-resizer-0
	4138b20e23b83       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5d452e5688fcbb2c574cde7eff329f655e8b84e7d7da583b69280cfb6ea82001                            About a minute ago   Running             gadget                                   0                   d348268b75410       gadget-jhtw6
	d6db3ccc8731e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   a924fafac788f       kube-ingress-dns-minikube
	14755bac67833       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Exited              storage-provisioner                      0                   e1e9062a537fc       storage-provisioner
	cea1861be3ae8       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                                             About a minute ago   Running             kube-proxy                               0                   45e1b5dfd57fb       kube-proxy-5jq48
	aa69a346cb722       registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21                        About a minute ago   Exited              metrics-server                           0                   d618ca4c98652       metrics-server-7c66d45ddc-jkrcj
	09b803467d9c5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   b64c334306ed0       coredns-5dd5756b68-97twm
	ba4a05a0c7a22       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                                             2 minutes ago        Running             kube-apiserver                           0                   cadfaa6eb6060       kube-apiserver-addons-317784
	c1f9b0cc72b7b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   f35540a56b98c       etcd-addons-317784
	505ab9c4cf6d2       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                                             2 minutes ago        Running             kube-controller-manager                  0                   a3cb989518dfa       kube-controller-manager-addons-317784
	dff28b8dc980b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                                             2 minutes ago        Running             kube-scheduler                           0                   f27f11921c2c3       kube-scheduler-addons-317784
	
	* 
	* ==> coredns [09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4] <==
	* [INFO] 10.244.0.9:41346 - 19020 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131778s
	[INFO] 10.244.0.9:58603 - 61307 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071162s
	[INFO] 10.244.0.9:58603 - 63870 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000265287s
	[INFO] 10.244.0.9:50521 - 2467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000439468s
	[INFO] 10.244.0.9:50521 - 17309 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075679s
	[INFO] 10.244.0.9:34498 - 13348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00984641s
	[INFO] 10.244.0.9:34498 - 64042 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000121428s
	[INFO] 10.244.0.9:35561 - 28704 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000580894s
	[INFO] 10.244.0.9:35561 - 40996 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000229485s
	[INFO] 10.244.0.9:60718 - 58845 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104188s
	[INFO] 10.244.0.9:60718 - 43396 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110913s
	[INFO] 10.244.0.9:49511 - 11468 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050358s
	[INFO] 10.244.0.9:49511 - 42446 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039897s
	[INFO] 10.244.0.9:38290 - 28507 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050005s
	[INFO] 10.244.0.9:38290 - 14681 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000202228s
	[INFO] 10.244.0.21:40537 - 4376 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000263958s
	[INFO] 10.244.0.21:42553 - 23185 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000081253s
	[INFO] 10.244.0.21:57194 - 2973 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014384s
	[INFO] 10.244.0.21:48501 - 9857 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089562s
	[INFO] 10.244.0.21:33440 - 31784 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103644s
	[INFO] 10.244.0.21:51334 - 21085 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058908s
	[INFO] 10.244.0.21:43251 - 14905 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000753986s
	[INFO] 10.244.0.21:36823 - 28834 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000367469s
	[INFO] 10.244.0.25:40537 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00023642s
	[INFO] 10.244.0.25:53536 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173751s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-317784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-317784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=addons-317784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T14_39_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-317784
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-317784"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:39:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-317784
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:41:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:41:27 +0000   Tue, 14 Nov 2023 14:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:41:27 +0000   Tue, 14 Nov 2023 14:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:41:27 +0000   Tue, 14 Nov 2023 14:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:41:27 +0000   Tue, 14 Nov 2023 14:39:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    addons-317784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 a83e3ef3393c4c0ebdac4f3d3aadc38f
	  System UUID:                a83e3ef3-393c-4c0e-bdac-4f3d3aadc38f
	  Boot ID:                    244a92c1-0d37-446c-b5f0-87cca554f62d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-9phzq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  gadget                      gadget-jhtw6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  gcp-auth                    gcp-auth-d4c87556c-fr8lj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  headlamp                    headlamp-777fd4b855-lx8bp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-tzwkh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         101s
	  kube-system                 coredns-5dd5756b68-97twm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     109s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpathplugin-z6dqk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 etcd-addons-317784                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-addons-317784                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-317784        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-5jq48                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-addons-317784                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-7c66d45ddc-jkrcj              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         102s
	  kube-system                 registry-frqvq                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 registry-proxy-kh6p9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 snapshot-controller-58dbcc7b99-7t6pq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 snapshot-controller-58dbcc7b99-zdcmh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  local-path-storage          local-path-provisioner-78b46b4d5c-7snd4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 91s   kube-proxy       
	  Normal  Starting                 2m3s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s  kubelet          Node addons-317784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s  kubelet          Node addons-317784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s  kubelet          Node addons-317784 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m2s  kubelet          Node addons-317784 status is now: NodeReady
	  Normal  RegisteredNode           110s  node-controller  Node addons-317784 event: Registered Node addons-317784 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 14:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093973] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.378192] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.375418] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147064] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.040265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.040617] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.102013] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136935] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.102659] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.215677] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +11.387383] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +8.257742] systemd-fstab-generator[1244]: Ignoring "noauto" for root device
	[Nov14 14:40] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.014432] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.027775] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.883546] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.618813] kauditd_printk_skb: 14 callbacks suppressed
	[Nov14 14:41] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.098119] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02] <==
	* {"level":"info","ts":"2023-11-14T14:41:10.231412Z","caller":"traceutil/trace.go:171","msg":"trace[1992998201] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"287.711996ms","start":"2023-11-14T14:41:09.943694Z","end":"2023-11-14T14:41:10.231406Z","steps":["trace[1992998201] 'process raft request'  (duration: 287.539946ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:10.231659Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.035462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10846"}
	{"level":"info","ts":"2023-11-14T14:41:10.231679Z","caller":"traceutil/trace.go:171","msg":"trace[570565576] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:979; }","duration":"238.069366ms","start":"2023-11-14T14:41:09.993603Z","end":"2023-11-14T14:41:10.231673Z","steps":["trace[570565576] 'agreement among raft nodes before linearized reading'  (duration: 237.955737ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:41:18.149603Z","caller":"traceutil/trace.go:171","msg":"trace[1950279856] linearizableReadLoop","detail":"{readStateIndex:1075; appliedIndex:1074; }","duration":"299.937631ms","start":"2023-11-14T14:41:17.849653Z","end":"2023-11-14T14:41:18.149591Z","steps":["trace[1950279856] 'read index received'  (duration: 299.765661ms)","trace[1950279856] 'applied index is now lower than readState.Index'  (duration: 171.395µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-14T14:41:18.149806Z","caller":"traceutil/trace.go:171","msg":"trace[1753596748] transaction","detail":"{read_only:false; response_revision:1045; number_of_response:1; }","duration":"364.05637ms","start":"2023-11-14T14:41:17.785741Z","end":"2023-11-14T14:41:18.149797Z","steps":["trace[1753596748] 'process raft request'  (duration: 363.715181ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:18.14992Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:41:17.785727Z","time spent":"364.135413ms","remote":"127.0.0.1:37490","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":932,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cxw9h.1797844963526f9e\" mod_revision:879 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cxw9h.1797844963526f9e\" value_size:831 lease:1163208321825825402 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-cxw9h.1797844963526f9e\" > >"}
	{"level":"warn","ts":"2023-11-14T14:41:18.149966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.346251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T14:41:18.150022Z","caller":"traceutil/trace.go:171","msg":"trace[65005257] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1045; }","duration":"300.398403ms","start":"2023-11-14T14:41:17.849616Z","end":"2023-11-14T14:41:18.150014Z","steps":["trace[65005257] 'agreement among raft nodes before linearized reading'  (duration: 300.337192ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:18.150048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:41:17.849597Z","time spent":"300.445614ms","remote":"127.0.0.1:37462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-11-14T14:41:18.149931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.282635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10948"}
	{"level":"info","ts":"2023-11-14T14:41:18.150466Z","caller":"traceutil/trace.go:171","msg":"trace[1125215588] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1045; }","duration":"156.823887ms","start":"2023-11-14T14:41:17.993632Z","end":"2023-11-14T14:41:18.150456Z","steps":["trace[1125215588] 'agreement among raft nodes before linearized reading'  (duration: 156.186418ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:41:27.008583Z","caller":"traceutil/trace.go:171","msg":"trace[1147004496] linearizableReadLoop","detail":"{readStateIndex:1122; appliedIndex:1121; }","duration":"316.302199ms","start":"2023-11-14T14:41:26.692266Z","end":"2023-11-14T14:41:27.008568Z","steps":["trace[1147004496] 'read index received'  (duration: 315.011836ms)","trace[1147004496] 'applied index is now lower than readState.Index'  (duration: 1.28968ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-14T14:41:27.008868Z","caller":"traceutil/trace.go:171","msg":"trace[1881180835] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"442.679445ms","start":"2023-11-14T14:41:26.566179Z","end":"2023-11-14T14:41:27.008858Z","steps":["trace[1881180835] 'process raft request'  (duration: 441.017746ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:27.008975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:41:26.566102Z","time spent":"442.820046ms","remote":"127.0.0.1:37536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-317784\" mod_revision:1033 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-317784\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-317784\" > >"}
	{"level":"warn","ts":"2023-11-14T14:41:27.009185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.957285ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13861"}
	{"level":"info","ts":"2023-11-14T14:41:27.009211Z","caller":"traceutil/trace.go:171","msg":"trace[848038853] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1090; }","duration":"316.987871ms","start":"2023-11-14T14:41:26.692216Z","end":"2023-11-14T14:41:27.009204Z","steps":["trace[848038853] 'agreement among raft nodes before linearized reading'  (duration: 316.857452ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:27.00923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:41:26.692203Z","time spent":"317.022675ms","remote":"127.0.0.1:37518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13884,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2023-11-14T14:41:27.009394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.736494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T14:41:27.009511Z","caller":"traceutil/trace.go:171","msg":"trace[1894288230] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1090; }","duration":"159.843458ms","start":"2023-11-14T14:41:26.849644Z","end":"2023-11-14T14:41:27.009487Z","steps":["trace[1894288230] 'agreement among raft nodes before linearized reading'  (duration: 159.72045ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:41:30.629956Z","caller":"traceutil/trace.go:171","msg":"trace[1998181659] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"214.596352ms","start":"2023-11-14T14:41:30.415345Z","end":"2023-11-14T14:41:30.629941Z","steps":["trace[1998181659] 'process raft request'  (duration: 214.467637ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:30.63225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.265651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-11-14T14:41:30.632312Z","caller":"traceutil/trace.go:171","msg":"trace[1526788742] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1095; }","duration":"159.337168ms","start":"2023-11-14T14:41:30.472964Z","end":"2023-11-14T14:41:30.632301Z","steps":["trace[1526788742] 'agreement among raft nodes before linearized reading'  (duration: 159.20187ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:41:30.632257Z","caller":"traceutil/trace.go:171","msg":"trace[1226340439] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"159.249558ms","start":"2023-11-14T14:41:30.472988Z","end":"2023-11-14T14:41:30.632237Z","steps":["trace[1226340439] 'read index received'  (duration: 159.000396ms)","trace[1226340439] 'applied index is now lower than readState.Index'  (duration: 247.526µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T14:41:30.633961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.390237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10948"}
	{"level":"info","ts":"2023-11-14T14:41:30.634014Z","caller":"traceutil/trace.go:171","msg":"trace[1382020109] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1095; }","duration":"140.44875ms","start":"2023-11-14T14:41:30.493558Z","end":"2023-11-14T14:41:30.634006Z","steps":["trace[1382020109] 'agreement among raft nodes before linearized reading'  (duration: 140.354087ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0] <==
	* 2023/11/14 14:41:34 GCP Auth Webhook started!
	2023/11/14 14:41:39 Ready to marshal response ...
	2023/11/14 14:41:39 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:44 Ready to marshal response ...
	2023/11/14 14:41:44 Ready to write response ...
	2023/11/14 14:41:47 Ready to marshal response ...
	2023/11/14 14:41:47 Ready to write response ...
	2023/11/14 14:41:47 Ready to marshal response ...
	2023/11/14 14:41:47 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  14:41:57 up 2 min,  0 users,  load average: 5.66, 2.68, 1.03
	Linux addons-317784 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf] <==
	* W1114 14:40:19.090642       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 14:40:20.023324       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.31.190"}
	I1114 14:40:21.271094       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 14:40:26.272575       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 14:40:51.372816       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 14:40:56.621743       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 14:40:56.622341       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1114 14:40:56.622646       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.128.171:443: connect: connection refused
	E1114 14:40:56.626547       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.128.171:443: connect: connection refused
	I1114 14:40:56.626636       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1114 14:40:56.630635       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.128.171:443: connect: connection refused
	E1114 14:40:56.655924       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.128.171:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.128.171:443: connect: connection refused
	I1114 14:40:56.923615       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 14:41:06.293985       1 trace.go:236] Trace[2051589026]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:39823908-f791-4a14-b3d5-8cf37b5335e6,client:192.168.39.16,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/ingress-nginx/pods/ingress-nginx-admission-patch-cxw9h/status,user-agent:kubelet/v1.28.3 (linux/amd64) kubernetes/a8a1abc,verb:PATCH (14-Nov-2023 14:41:05.784) (total time: 508ms):
	Trace[2051589026]: ["GuaranteedUpdate etcd3" audit-id:39823908-f791-4a14-b3d5-8cf37b5335e6,key:/pods/ingress-nginx/ingress-nginx-admission-patch-cxw9h,type:*core.Pod,resource:pods 508ms (14:41:05.785)
	Trace[2051589026]:  ---"Txn call completed" 499ms (14:41:06.286)]
	Trace[2051589026]: ---"Object stored in database" 499ms (14:41:06.286)
	Trace[2051589026]: [508.994179ms] [508.994179ms] END
	I1114 14:41:42.394368       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1114 14:41:42.676382       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.177.195"}
	I1114 14:41:42.756575       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.95.233"}
	E1114 14:41:43.997079       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.16:8443->10.244.0.22:46814: read: connection reset by peer
	I1114 14:41:51.376584       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887] <==
	* I1114 14:41:32.381078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="54.503µs"
	I1114 14:41:34.381726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.278256ms"
	I1114 14:41:34.381892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="49.696µs"
	I1114 14:41:41.087876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="29.938155ms"
	I1114 14:41:41.088726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="631.628µs"
	I1114 14:41:42.821322       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-777fd4b855 to 1"
	I1114 14:41:42.844947       1 event.go:307] "Event occurred" object="headlamp/headlamp-777fd4b855" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-777fd4b855-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I1114 14:41:42.876844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="55.79439ms"
	E1114 14:41:42.877078       1 replica_set.go:557] sync "headlamp/headlamp-777fd4b855" failed with pods "headlamp-777fd4b855-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I1114 14:41:42.925963       1 event.go:307] "Event occurred" object="headlamp/headlamp-777fd4b855" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-777fd4b855-lx8bp"
	I1114 14:41:42.959389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="77.27832ms"
	I1114 14:41:43.003611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="44.093448ms"
	I1114 14:41:43.003850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="98.369µs"
	I1114 14:41:43.033199       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1114 14:41:43.077818       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1114 14:41:46.016838       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1114 14:41:46.127612       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1114 14:41:46.948874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.625µs"
	I1114 14:41:47.373970       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1114 14:41:47.595623       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1114 14:41:47.595682       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1114 14:41:52.478025       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1114 14:41:53.252184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="60.815µs"
	I1114 14:41:53.295089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="11.292947ms"
	I1114 14:41:53.295574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-777fd4b855" duration="66.462µs"
	
	* 
	* ==> kube-proxy [cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653] <==
	* I1114 14:40:24.627085       1 server_others.go:69] "Using iptables proxy"
	I1114 14:40:25.293037       1 node.go:141] Successfully retrieved node IP: 192.168.39.16
	I1114 14:40:25.734833       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 14:40:25.734935       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 14:40:25.836357       1 server_others.go:152] "Using iptables Proxier"
	I1114 14:40:25.836440       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 14:40:25.836695       1 server.go:846] "Version info" version="v1.28.3"
	I1114 14:40:25.836706       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 14:40:25.851680       1 config.go:188] "Starting service config controller"
	I1114 14:40:25.851838       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 14:40:25.851880       1 config.go:97] "Starting endpoint slice config controller"
	I1114 14:40:25.851884       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 14:40:25.856832       1 config.go:315] "Starting node config controller"
	I1114 14:40:25.856843       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 14:40:25.952009       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 14:40:25.952252       1 shared_informer.go:318] Caches are synced for service config
	I1114 14:40:25.960853       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539] <==
	* W1114 14:39:51.529533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 14:39:51.530067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 14:39:51.530171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 14:39:51.530227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 14:39:51.530271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 14:39:51.531565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:51.531572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 14:39:51.531683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 14:39:51.531839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 14:39:51.531925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:39:51.531993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:51.532063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1114 14:39:52.342331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:52.342358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 14:39:52.481787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:39:52.481844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 14:39:52.569316       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 14:39:52.569366       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 14:39:52.601348       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:39:52.601406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 14:39:52.634998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:52.635048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 14:39:52.679528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 14:39:52.679582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1114 14:39:55.503840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 14:39:21 UTC, ends at Tue 2023-11-14 14:41:58 UTC. --
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.256884    1251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e18cfc2a73d34e620a7825fa7010d33f02cf29a2d5d54c71f635eb6a271ab8e"
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442007    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/132e822a-d359-477c-a611-a01f2a006604-data\") pod \"132e822a-d359-477c-a611-a01f2a006604\" (UID: \"132e822a-d359-477c-a611-a01f2a006604\") "
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442048    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/132e822a-d359-477c-a611-a01f2a006604-script\") pod \"132e822a-d359-477c-a611-a01f2a006604\" (UID: \"132e822a-d359-477c-a611-a01f2a006604\") "
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442081    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2bjk\" (UniqueName: \"kubernetes.io/projected/132e822a-d359-477c-a611-a01f2a006604-kube-api-access-q2bjk\") pod \"132e822a-d359-477c-a611-a01f2a006604\" (UID: \"132e822a-d359-477c-a611-a01f2a006604\") "
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442098    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/132e822a-d359-477c-a611-a01f2a006604-gcp-creds\") pod \"132e822a-d359-477c-a611-a01f2a006604\" (UID: \"132e822a-d359-477c-a611-a01f2a006604\") "
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442314    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/132e822a-d359-477c-a611-a01f2a006604-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "132e822a-d359-477c-a611-a01f2a006604" (UID: "132e822a-d359-477c-a611-a01f2a006604"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442341    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/132e822a-d359-477c-a611-a01f2a006604-data" (OuterVolumeSpecName: "data") pod "132e822a-d359-477c-a611-a01f2a006604" (UID: "132e822a-d359-477c-a611-a01f2a006604"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.442699    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/132e822a-d359-477c-a611-a01f2a006604-script" (OuterVolumeSpecName: "script") pod "132e822a-d359-477c-a611-a01f2a006604" (UID: "132e822a-d359-477c-a611-a01f2a006604"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.450321    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132e822a-d359-477c-a611-a01f2a006604-kube-api-access-q2bjk" (OuterVolumeSpecName: "kube-api-access-q2bjk") pod "132e822a-d359-477c-a611-a01f2a006604" (UID: "132e822a-d359-477c-a611-a01f2a006604"). InnerVolumeSpecName "kube-api-access-q2bjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.543333    1251 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/132e822a-d359-477c-a611-a01f2a006604-script\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.543398    1251 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q2bjk\" (UniqueName: \"kubernetes.io/projected/132e822a-d359-477c-a611-a01f2a006604-kube-api-access-q2bjk\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.543410    1251 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/132e822a-d359-477c-a611-a01f2a006604-gcp-creds\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.543421    1251 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/132e822a-d359-477c-a611-a01f2a006604-data\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:41:56 addons-317784 kubelet[1251]: I1114 14:41:56.786296    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="55c3e919-8947-4277-aeb1-45e8f263c870" path="/var/lib/kubelet/pods/55c3e919-8947-4277-aeb1-45e8f263c870/volumes"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.265030    1251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7b8b729b0d48b63126569b48fc84563e79abdda45cf53974f71c120aaa75a3"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.595725    1251 topology_manager.go:215] "Topology Admit Handler" podUID="048ea378-0095-4054-8a33-0e00d927fe77" podNamespace="default" podName="test-local-path"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: E1114 14:41:57.595797    1251 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="930fbb39-4b02-4205-8c93-f43026252d00" containerName="tiller"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: E1114 14:41:57.595808    1251 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55c3e919-8947-4277-aeb1-45e8f263c870" containerName="registry-test"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: E1114 14:41:57.595815    1251 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="132e822a-d359-477c-a611-a01f2a006604" containerName="helper-pod"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.595848    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="930fbb39-4b02-4205-8c93-f43026252d00" containerName="tiller"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.595856    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="55c3e919-8947-4277-aeb1-45e8f263c870" containerName="registry-test"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.595863    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="132e822a-d359-477c-a611-a01f2a006604" containerName="helper-pod"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.652698    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a752c059-4770-47b4-8afa-af875685de10\" (UniqueName: \"kubernetes.io/host-path/048ea378-0095-4054-8a33-0e00d927fe77-pvc-a752c059-4770-47b4-8afa-af875685de10\") pod \"test-local-path\" (UID: \"048ea378-0095-4054-8a33-0e00d927fe77\") " pod="default/test-local-path"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.652755    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5pw8\" (UniqueName: \"kubernetes.io/projected/048ea378-0095-4054-8a33-0e00d927fe77-kube-api-access-z5pw8\") pod \"test-local-path\" (UID: \"048ea378-0095-4054-8a33-0e00d927fe77\") " pod="default/test-local-path"
	Nov 14 14:41:57 addons-317784 kubelet[1251]: I1114 14:41:57.652790    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/048ea378-0095-4054-8a33-0e00d927fe77-gcp-creds\") pod \"test-local-path\" (UID: \"048ea378-0095-4054-8a33-0e00d927fe77\") " pod="default/test-local-path"
	
	* 
	* ==> storage-provisioner [14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db] <==
	* I1114 14:40:24.346850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1114 14:40:54.392047       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514] <==
	* I1114 14:40:56.131435       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 14:40:56.148785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 14:40:56.148927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 14:40:56.169906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 14:40:56.172213       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-317784_8a56bc5e-916b-4506-ba59-40b1e3ec7ba5!
	I1114 14:40:56.182208       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d057e82-a032-438c-96d7-82fbcaa8824b", APIVersion:"v1", ResourceVersion:"903", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-317784_8a56bc5e-916b-4506-ba59-40b1e3ec7ba5 became leader
	I1114 14:40:56.274510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-317784_8a56bc5e-916b-4506-ba59-40b1e3ec7ba5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-317784 -n addons-317784
helpers_test.go:261: (dbg) Run:  kubectl --context addons-317784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path ingress-nginx-admission-create-mp8tp ingress-nginx-admission-patch-cxw9h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-317784 describe pod test-local-path ingress-nginx-admission-create-mp8tp ingress-nginx-admission-patch-cxw9h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-317784 describe pod test-local-path ingress-nginx-admission-create-mp8tp ingress-nginx-admission-patch-cxw9h: exit status 1 (105.972188ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-317784/192.168.39.16
	Start Time:       Tue, 14 Nov 2023 14:41:57 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5pw8 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-z5pw8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/test-local-path to addons-317784

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mp8tp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cxw9h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-317784 describe pod test-local-path ingress-nginx-admission-create-mp8tp ingress-nginx-admission-patch-cxw9h: exit status 1
--- FAIL: TestAddons/parallel/Registry (24.27s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (162.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-317784 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context addons-317784 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.477217683s)
addons_test.go:231: (dbg) Run:  kubectl --context addons-317784 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Done: kubectl --context addons-317784 replace --force -f testdata/nginx-ingress-v1.yaml: (1.348646064s)
addons_test.go:244: (dbg) Run:  kubectl --context addons-317784 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1305e1fa-41d3-4ccb-9590-a5da7f844175] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1305e1fa-41d3-4ccb-9590-a5da7f844175] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.030646805s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-317784 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.317527851s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-317784 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.16
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 addons disable ingress-dns --alsologtostderr -v=1: (1.328270669s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 addons disable ingress --alsologtostderr -v=1: (7.855108648s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-317784 -n addons-317784
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 logs -n 25: (1.457024434s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | -p download-only-430804                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| delete  | -p download-only-430804                                                                     | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| delete  | -p download-only-430804                                                                     | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-886653 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | binary-mirror-886653                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44247                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-886653                                                                     | binary-mirror-886653 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:39 UTC |
	| addons  | enable dashboard -p                                                                         | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | addons-317784                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |                     |
	|         | addons-317784                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-317784 --wait=true                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC | 14 Nov 23 14:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	|         | -p addons-317784                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	|         | -p addons-317784                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-317784 addons disable                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-317784 ssh curl -s                                                                   | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-317784 ip                                                                            | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC | 14 Nov 23 14:41 UTC |
	| addons  | addons-317784 addons disable                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:41 UTC |                     |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-317784 ssh cat                                                                       | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | /opt/local-path-provisioner/pvc-a752c059-4770-47b4-8afa-af875685de10_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-317784 addons disable                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | addons-317784                                                                               |                      |         |         |                     |                     |
	| addons  | addons-317784 addons                                                                        | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | addons-317784                                                                               |                      |         |         |                     |                     |
	| addons  | addons-317784 addons                                                                        | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-317784 addons                                                                        | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:42 UTC | 14 Nov 23 14:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-317784 ip                                                                            | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:44 UTC | 14 Nov 23 14:44 UTC |
	| addons  | addons-317784 addons disable                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:44 UTC | 14 Nov 23 14:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-317784 addons disable                                                                | addons-317784        | jenkins | v1.32.0 | 14 Nov 23 14:44 UTC | 14 Nov 23 14:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:39:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:39:09.052952  832572 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:39:09.053099  832572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:39:09.053107  832572 out.go:309] Setting ErrFile to fd 2...
	I1114 14:39:09.053115  832572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:39:09.053344  832572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 14:39:09.053994  832572 out.go:303] Setting JSON to false
	I1114 14:39:09.055580  832572 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":40901,"bootTime":1699931848,"procs":894,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:39:09.055673  832572 start.go:138] virtualization: kvm guest
	I1114 14:39:09.058095  832572 out.go:177] * [addons-317784] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:39:09.059991  832572 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 14:39:09.059980  832572 notify.go:220] Checking for updates...
	I1114 14:39:09.061689  832572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:39:09.063158  832572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:39:09.064456  832572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:09.065697  832572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 14:39:09.066965  832572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:39:09.068482  832572 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:39:09.099552  832572 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 14:39:09.100844  832572 start.go:298] selected driver: kvm2
	I1114 14:39:09.100859  832572 start.go:902] validating driver "kvm2" against <nil>
	I1114 14:39:09.100873  832572 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:39:09.101844  832572 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:39:09.102024  832572 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 14:39:09.116399  832572 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 14:39:09.116466  832572 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 14:39:09.116719  832572 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 14:39:09.116815  832572 cni.go:84] Creating CNI manager for ""
	I1114 14:39:09.116832  832572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:09.116848  832572 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 14:39:09.116860  832572 start_flags.go:323] config:
	{Name:addons-317784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:39:09.117040  832572 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:39:09.118890  832572 out.go:177] * Starting control plane node addons-317784 in cluster addons-317784
	I1114 14:39:09.120112  832572 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:09.120149  832572 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 14:39:09.120163  832572 cache.go:56] Caching tarball of preloaded images
	I1114 14:39:09.120252  832572 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 14:39:09.120266  832572 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 14:39:09.120699  832572 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/config.json ...
	I1114 14:39:09.120727  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/config.json: {Name:mk6b3b140c9356d26ddf8c22aad8ca9884759df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:09.120911  832572 start.go:365] acquiring machines lock for addons-317784: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 14:39:09.120983  832572 start.go:369] acquired machines lock for "addons-317784" in 55.229µs
	I1114 14:39:09.121013  832572 start.go:93] Provisioning new machine with config: &{Name:addons-317784 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:39:09.121079  832572 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 14:39:09.122843  832572 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1114 14:39:09.122976  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:39:09.123025  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:39:09.136101  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I1114 14:39:09.136557  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:39:09.137545  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:39:09.137579  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:39:09.138719  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:39:09.138940  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:09.139149  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:09.139321  832572 start.go:159] libmachine.API.Create for "addons-317784" (driver="kvm2")
	I1114 14:39:09.139375  832572 client.go:168] LocalClient.Create starting
	I1114 14:39:09.139465  832572 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 14:39:09.197124  832572 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 14:39:09.320157  832572 main.go:141] libmachine: Running pre-create checks...
	I1114 14:39:09.320183  832572 main.go:141] libmachine: (addons-317784) Calling .PreCreateCheck
	I1114 14:39:09.320787  832572 main.go:141] libmachine: (addons-317784) Calling .GetConfigRaw
	I1114 14:39:09.321224  832572 main.go:141] libmachine: Creating machine...
	I1114 14:39:09.321242  832572 main.go:141] libmachine: (addons-317784) Calling .Create
	I1114 14:39:09.321395  832572 main.go:141] libmachine: (addons-317784) Creating KVM machine...
	I1114 14:39:09.322762  832572 main.go:141] libmachine: (addons-317784) DBG | found existing default KVM network
	I1114 14:39:09.323495  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.323343  832594 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1114 14:39:09.329312  832572 main.go:141] libmachine: (addons-317784) DBG | trying to create private KVM network mk-addons-317784 192.168.39.0/24...
	I1114 14:39:09.400884  832572 main.go:141] libmachine: (addons-317784) DBG | private KVM network mk-addons-317784 192.168.39.0/24 created
	I1114 14:39:09.400927  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.400867  832594 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:09.400951  832572 main.go:141] libmachine: (addons-317784) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784 ...
	I1114 14:39:09.400969  832572 main.go:141] libmachine: (addons-317784) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 14:39:09.401061  832572 main.go:141] libmachine: (addons-317784) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 14:39:09.632257  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.632134  832594 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa...
	I1114 14:39:09.733804  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.733640  832594 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/addons-317784.rawdisk...
	I1114 14:39:09.733859  832572 main.go:141] libmachine: (addons-317784) DBG | Writing magic tar header
	I1114 14:39:09.733875  832572 main.go:141] libmachine: (addons-317784) DBG | Writing SSH key tar header
	I1114 14:39:09.733885  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:09.733814  832594 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784 ...
	I1114 14:39:09.734045  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784
	I1114 14:39:09.734080  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 14:39:09.734094  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784 (perms=drwx------)
	I1114 14:39:09.734111  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 14:39:09.734127  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 14:39:09.734141  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:09.734160  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 14:39:09.734186  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 14:39:09.734196  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 14:39:09.734204  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home/jenkins
	I1114 14:39:09.734215  832572 main.go:141] libmachine: (addons-317784) DBG | Checking permissions on dir: /home
	I1114 14:39:09.734226  832572 main.go:141] libmachine: (addons-317784) DBG | Skipping /home - not owner
	I1114 14:39:09.734236  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 14:39:09.734242  832572 main.go:141] libmachine: (addons-317784) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 14:39:09.734265  832572 main.go:141] libmachine: (addons-317784) Creating domain...
	I1114 14:39:09.735388  832572 main.go:141] libmachine: (addons-317784) define libvirt domain using xml: 
	I1114 14:39:09.735404  832572 main.go:141] libmachine: (addons-317784) <domain type='kvm'>
	I1114 14:39:09.735411  832572 main.go:141] libmachine: (addons-317784)   <name>addons-317784</name>
	I1114 14:39:09.735417  832572 main.go:141] libmachine: (addons-317784)   <memory unit='MiB'>4000</memory>
	I1114 14:39:09.735423  832572 main.go:141] libmachine: (addons-317784)   <vcpu>2</vcpu>
	I1114 14:39:09.735428  832572 main.go:141] libmachine: (addons-317784)   <features>
	I1114 14:39:09.735440  832572 main.go:141] libmachine: (addons-317784)     <acpi/>
	I1114 14:39:09.735481  832572 main.go:141] libmachine: (addons-317784)     <apic/>
	I1114 14:39:09.735497  832572 main.go:141] libmachine: (addons-317784)     <pae/>
	I1114 14:39:09.735503  832572 main.go:141] libmachine: (addons-317784)     
	I1114 14:39:09.735508  832572 main.go:141] libmachine: (addons-317784)   </features>
	I1114 14:39:09.735514  832572 main.go:141] libmachine: (addons-317784)   <cpu mode='host-passthrough'>
	I1114 14:39:09.735519  832572 main.go:141] libmachine: (addons-317784)   
	I1114 14:39:09.735525  832572 main.go:141] libmachine: (addons-317784)   </cpu>
	I1114 14:39:09.735530  832572 main.go:141] libmachine: (addons-317784)   <os>
	I1114 14:39:09.735540  832572 main.go:141] libmachine: (addons-317784)     <type>hvm</type>
	I1114 14:39:09.735549  832572 main.go:141] libmachine: (addons-317784)     <boot dev='cdrom'/>
	I1114 14:39:09.735559  832572 main.go:141] libmachine: (addons-317784)     <boot dev='hd'/>
	I1114 14:39:09.735577  832572 main.go:141] libmachine: (addons-317784)     <bootmenu enable='no'/>
	I1114 14:39:09.735594  832572 main.go:141] libmachine: (addons-317784)   </os>
	I1114 14:39:09.735603  832572 main.go:141] libmachine: (addons-317784)   <devices>
	I1114 14:39:09.735611  832572 main.go:141] libmachine: (addons-317784)     <disk type='file' device='cdrom'>
	I1114 14:39:09.735621  832572 main.go:141] libmachine: (addons-317784)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/boot2docker.iso'/>
	I1114 14:39:09.735631  832572 main.go:141] libmachine: (addons-317784)       <target dev='hdc' bus='scsi'/>
	I1114 14:39:09.735642  832572 main.go:141] libmachine: (addons-317784)       <readonly/>
	I1114 14:39:09.735654  832572 main.go:141] libmachine: (addons-317784)     </disk>
	I1114 14:39:09.735667  832572 main.go:141] libmachine: (addons-317784)     <disk type='file' device='disk'>
	I1114 14:39:09.735683  832572 main.go:141] libmachine: (addons-317784)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 14:39:09.735709  832572 main.go:141] libmachine: (addons-317784)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/addons-317784.rawdisk'/>
	I1114 14:39:09.735721  832572 main.go:141] libmachine: (addons-317784)       <target dev='hda' bus='virtio'/>
	I1114 14:39:09.735727  832572 main.go:141] libmachine: (addons-317784)     </disk>
	I1114 14:39:09.735735  832572 main.go:141] libmachine: (addons-317784)     <interface type='network'>
	I1114 14:39:09.735746  832572 main.go:141] libmachine: (addons-317784)       <source network='mk-addons-317784'/>
	I1114 14:39:09.735760  832572 main.go:141] libmachine: (addons-317784)       <model type='virtio'/>
	I1114 14:39:09.735770  832572 main.go:141] libmachine: (addons-317784)     </interface>
	I1114 14:39:09.735783  832572 main.go:141] libmachine: (addons-317784)     <interface type='network'>
	I1114 14:39:09.735796  832572 main.go:141] libmachine: (addons-317784)       <source network='default'/>
	I1114 14:39:09.735808  832572 main.go:141] libmachine: (addons-317784)       <model type='virtio'/>
	I1114 14:39:09.735837  832572 main.go:141] libmachine: (addons-317784)     </interface>
	I1114 14:39:09.735859  832572 main.go:141] libmachine: (addons-317784)     <serial type='pty'>
	I1114 14:39:09.735877  832572 main.go:141] libmachine: (addons-317784)       <target port='0'/>
	I1114 14:39:09.735887  832572 main.go:141] libmachine: (addons-317784)     </serial>
	I1114 14:39:09.735897  832572 main.go:141] libmachine: (addons-317784)     <console type='pty'>
	I1114 14:39:09.735908  832572 main.go:141] libmachine: (addons-317784)       <target type='serial' port='0'/>
	I1114 14:39:09.735922  832572 main.go:141] libmachine: (addons-317784)     </console>
	I1114 14:39:09.735934  832572 main.go:141] libmachine: (addons-317784)     <rng model='virtio'>
	I1114 14:39:09.735949  832572 main.go:141] libmachine: (addons-317784)       <backend model='random'>/dev/random</backend>
	I1114 14:39:09.735960  832572 main.go:141] libmachine: (addons-317784)     </rng>
	I1114 14:39:09.735973  832572 main.go:141] libmachine: (addons-317784)     
	I1114 14:39:09.735983  832572 main.go:141] libmachine: (addons-317784)     
	I1114 14:39:09.735993  832572 main.go:141] libmachine: (addons-317784)   </devices>
	I1114 14:39:09.736005  832572 main.go:141] libmachine: (addons-317784) </domain>
	I1114 14:39:09.736020  832572 main.go:141] libmachine: (addons-317784) 
	I1114 14:39:09.740496  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:13:d7:49 in network default
	I1114 14:39:09.741231  832572 main.go:141] libmachine: (addons-317784) Ensuring networks are active...
	I1114 14:39:09.741254  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:09.741901  832572 main.go:141] libmachine: (addons-317784) Ensuring network default is active
	I1114 14:39:09.742192  832572 main.go:141] libmachine: (addons-317784) Ensuring network mk-addons-317784 is active
	I1114 14:39:09.742683  832572 main.go:141] libmachine: (addons-317784) Getting domain xml...
	I1114 14:39:09.743391  832572 main.go:141] libmachine: (addons-317784) Creating domain...
	I1114 14:39:10.966555  832572 main.go:141] libmachine: (addons-317784) Waiting to get IP...
	I1114 14:39:10.967271  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:10.967816  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:10.967868  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:10.967797  832594 retry.go:31] will retry after 240.12088ms: waiting for machine to come up
	I1114 14:39:11.209223  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:11.209636  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:11.209675  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:11.209595  832594 retry.go:31] will retry after 309.483531ms: waiting for machine to come up
	I1114 14:39:11.521270  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:11.521697  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:11.521733  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:11.521637  832594 retry.go:31] will retry after 471.628216ms: waiting for machine to come up
	I1114 14:39:11.995203  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:11.995798  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:11.995829  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:11.995751  832594 retry.go:31] will retry after 519.057067ms: waiting for machine to come up
	I1114 14:39:12.516423  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:12.516898  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:12.516932  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:12.516825  832594 retry.go:31] will retry after 718.762554ms: waiting for machine to come up
	I1114 14:39:13.236753  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:13.237201  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:13.237236  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:13.237133  832594 retry.go:31] will retry after 811.725044ms: waiting for machine to come up
	I1114 14:39:14.050163  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:14.050638  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:14.050671  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:14.050577  832594 retry.go:31] will retry after 913.225481ms: waiting for machine to come up
	I1114 14:39:14.965344  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:14.965842  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:14.965875  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:14.965748  832594 retry.go:31] will retry after 999.497751ms: waiting for machine to come up
	I1114 14:39:15.966960  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:15.967359  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:15.967389  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:15.967308  832594 retry.go:31] will retry after 1.790301588s: waiting for machine to come up
	I1114 14:39:17.760304  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:17.760777  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:17.760811  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:17.760705  832594 retry.go:31] will retry after 1.793227337s: waiting for machine to come up
	I1114 14:39:19.556092  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:19.556536  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:19.556570  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:19.556495  832594 retry.go:31] will retry after 2.414609963s: waiting for machine to come up
	I1114 14:39:21.974452  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:21.975013  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:21.975050  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:21.974955  832594 retry.go:31] will retry after 3.059180002s: waiting for machine to come up
	I1114 14:39:25.035634  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:25.036086  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:25.036111  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:25.036043  832594 retry.go:31] will retry after 3.834961778s: waiting for machine to come up
	I1114 14:39:28.876050  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:28.876510  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find current IP address of domain addons-317784 in network mk-addons-317784
	I1114 14:39:28.876534  832572 main.go:141] libmachine: (addons-317784) DBG | I1114 14:39:28.876466  832594 retry.go:31] will retry after 3.579833892s: waiting for machine to come up
	I1114 14:39:32.460168  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.460685  832572 main.go:141] libmachine: (addons-317784) Found IP for machine: 192.168.39.16
	I1114 14:39:32.460711  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has current primary IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.460720  832572 main.go:141] libmachine: (addons-317784) Reserving static IP address...
	I1114 14:39:32.461141  832572 main.go:141] libmachine: (addons-317784) DBG | unable to find host DHCP lease matching {name: "addons-317784", mac: "52:54:00:0f:c8:7d", ip: "192.168.39.16"} in network mk-addons-317784
	I1114 14:39:32.536153  832572 main.go:141] libmachine: (addons-317784) DBG | Getting to WaitForSSH function...
	I1114 14:39:32.536188  832572 main.go:141] libmachine: (addons-317784) Reserved static IP address: 192.168.39.16
	I1114 14:39:32.536203  832572 main.go:141] libmachine: (addons-317784) Waiting for SSH to be available...
	I1114 14:39:32.538829  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.539235  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.539264  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.539438  832572 main.go:141] libmachine: (addons-317784) DBG | Using SSH client type: external
	I1114 14:39:32.539470  832572 main.go:141] libmachine: (addons-317784) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa (-rw-------)
	I1114 14:39:32.539532  832572 main.go:141] libmachine: (addons-317784) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 14:39:32.539565  832572 main.go:141] libmachine: (addons-317784) DBG | About to run SSH command:
	I1114 14:39:32.539580  832572 main.go:141] libmachine: (addons-317784) DBG | exit 0
	I1114 14:39:32.624235  832572 main.go:141] libmachine: (addons-317784) DBG | SSH cmd err, output: <nil>: 
	I1114 14:39:32.624510  832572 main.go:141] libmachine: (addons-317784) KVM machine creation complete!
	I1114 14:39:32.624857  832572 main.go:141] libmachine: (addons-317784) Calling .GetConfigRaw
	I1114 14:39:32.625389  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:32.625671  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:32.625827  832572 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 14:39:32.625842  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:39:32.627221  832572 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 14:39:32.627243  832572 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 14:39:32.627252  832572 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 14:39:32.627261  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.629388  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.629756  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.629787  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.629877  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.630096  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.630256  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.630477  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.630671  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.631015  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.631027  832572 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 14:39:32.739878  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:39:32.739905  832572 main.go:141] libmachine: Detecting the provisioner...
	I1114 14:39:32.739914  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.742635  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.742985  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.743013  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.743281  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.743492  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.743690  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.743815  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.744017  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.744429  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.744446  832572 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 14:39:32.853399  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 14:39:32.853507  832572 main.go:141] libmachine: found compatible host: buildroot
	I1114 14:39:32.853519  832572 main.go:141] libmachine: Provisioning with buildroot...
	I1114 14:39:32.853529  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:32.853921  832572 buildroot.go:166] provisioning hostname "addons-317784"
	I1114 14:39:32.853957  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:32.854188  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.856942  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.857316  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.857345  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.857497  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.857689  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.857833  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.857992  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.858148  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.858516  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.858530  832572 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-317784 && echo "addons-317784" | sudo tee /etc/hostname
	I1114 14:39:32.982771  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-317784
	
	I1114 14:39:32.982803  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:32.985627  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.985977  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:32.986009  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:32.986215  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:32.986410  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.986610  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:32.986756  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:32.986954  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:32.987305  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:32.987330  832572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-317784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-317784/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-317784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:39:33.104338  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:39:33.104381  832572 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 14:39:33.104415  832572 buildroot.go:174] setting up certificates
	I1114 14:39:33.104430  832572 provision.go:83] configureAuth start
	I1114 14:39:33.104450  832572 main.go:141] libmachine: (addons-317784) Calling .GetMachineName
	I1114 14:39:33.104806  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:33.107688  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.108079  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.108117  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.108223  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.110524  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.110806  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.110834  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.111000  832572 provision.go:138] copyHostCerts
	I1114 14:39:33.111084  832572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 14:39:33.111224  832572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 14:39:33.111285  832572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 14:39:33.111329  832572 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.addons-317784 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube addons-317784]
	I1114 14:39:33.207568  832572 provision.go:172] copyRemoteCerts
	I1114 14:39:33.207622  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:39:33.207646  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.210319  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.210741  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.210773  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.210969  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.211169  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.211310  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.211477  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:33.293933  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:39:33.314955  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1114 14:39:33.335727  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:39:33.356707  832572 provision.go:86] duration metric: configureAuth took 252.258663ms
	I1114 14:39:33.356734  832572 buildroot.go:189] setting minikube options for container-runtime
	I1114 14:39:33.356963  832572 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:39:33.357055  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.359795  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.360126  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.360152  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.360312  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.360521  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.360669  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.360822  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.360972  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:33.361352  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:33.361382  832572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:39:33.670489  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:39:33.670520  832572 main.go:141] libmachine: Checking connection to Docker...
	I1114 14:39:33.670548  832572 main.go:141] libmachine: (addons-317784) Calling .GetURL
	I1114 14:39:33.671804  832572 main.go:141] libmachine: (addons-317784) DBG | Using libvirt version 6000000
	I1114 14:39:33.673934  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.674288  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.674325  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.674474  832572 main.go:141] libmachine: Docker is up and running!
	I1114 14:39:33.674503  832572 main.go:141] libmachine: Reticulating splines...
	I1114 14:39:33.674514  832572 client.go:171] LocalClient.Create took 24.535124455s
	I1114 14:39:33.674562  832572 start.go:167] duration metric: libmachine.API.Create for "addons-317784" took 24.535243508s
	I1114 14:39:33.674584  832572 start.go:300] post-start starting for "addons-317784" (driver="kvm2")
	I1114 14:39:33.674599  832572 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:39:33.674625  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.674895  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:39:33.674920  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.677074  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.677421  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.677448  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.677537  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.677724  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.677888  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.678030  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:33.766858  832572 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:39:33.771189  832572 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 14:39:33.771219  832572 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 14:39:33.771284  832572 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 14:39:33.771314  832572 start.go:303] post-start completed in 96.722326ms
	I1114 14:39:33.771363  832572 main.go:141] libmachine: (addons-317784) Calling .GetConfigRaw
	I1114 14:39:33.772065  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:33.775136  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.775548  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.775583  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.775865  832572 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/config.json ...
	I1114 14:39:33.776034  832572 start.go:128] duration metric: createHost completed in 24.654943759s
	I1114 14:39:33.776059  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.778316  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.778651  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.778698  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.778780  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.778969  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.779136  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.779315  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.779458  832572 main.go:141] libmachine: Using SSH client type: native
	I1114 14:39:33.779834  832572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I1114 14:39:33.779846  832572 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 14:39:33.893471  832572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699972773.873976570
	
	I1114 14:39:33.893503  832572 fix.go:206] guest clock: 1699972773.873976570
	I1114 14:39:33.893513  832572 fix.go:219] Guest: 2023-11-14 14:39:33.87397657 +0000 UTC Remote: 2023-11-14 14:39:33.776046379 +0000 UTC m=+24.772082453 (delta=97.930191ms)
	I1114 14:39:33.893566  832572 fix.go:190] guest clock delta is within tolerance: 97.930191ms
	I1114 14:39:33.893577  832572 start.go:83] releasing machines lock for "addons-317784", held for 24.772577516s
	I1114 14:39:33.893611  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.893956  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:33.896411  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.896869  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.896901  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.897066  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.897589  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.897761  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:39:33.897851  832572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:39:33.897885  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.898165  832572 ssh_runner.go:195] Run: cat /version.json
	I1114 14:39:33.898194  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:39:33.900960  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901230  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901350  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.901378  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901503  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.901613  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:33.901643  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:33.901669  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.901767  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:39:33.901848  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.901953  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:39:33.901983  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:33.902121  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:39:33.902261  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:39:34.008354  832572 ssh_runner.go:195] Run: systemctl --version
	I1114 14:39:34.014133  832572 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:39:34.172934  832572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 14:39:34.178768  832572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 14:39:34.178843  832572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:39:34.194373  832572 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:39:34.194400  832572 start.go:472] detecting cgroup driver to use...
	I1114 14:39:34.194468  832572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:39:34.208205  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:39:34.221108  832572 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:39:34.221178  832572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:39:34.234144  832572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:39:34.247071  832572 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 14:39:34.346956  832572 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:39:34.465035  832572 docker.go:219] disabling docker service ...
	I1114 14:39:34.465112  832572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:39:34.478789  832572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:39:34.490653  832572 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:39:34.591474  832572 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:39:34.690445  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:39:34.704413  832572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:39:34.721857  832572 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 14:39:34.721931  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.732055  832572 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 14:39:34.732141  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.742890  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.753224  832572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:39:34.763611  832572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:39:34.774398  832572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:39:34.783783  832572 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 14:39:34.783843  832572 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 14:39:34.797725  832572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:39:34.807353  832572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:39:34.905313  832572 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 14:39:35.350749  832572 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 14:39:35.350861  832572 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 14:39:35.359838  832572 start.go:540] Will wait 60s for crictl version
	I1114 14:39:35.359946  832572 ssh_runner.go:195] Run: which crictl
	I1114 14:39:35.363920  832572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:39:35.408965  832572 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 14:39:35.409074  832572 ssh_runner.go:195] Run: crio --version
	I1114 14:39:35.462767  832572 ssh_runner.go:195] Run: crio --version
	I1114 14:39:35.597286  832572 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 14:39:35.660470  832572 main.go:141] libmachine: (addons-317784) Calling .GetIP
	I1114 14:39:35.663541  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:35.663855  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:39:35.663896  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:39:35.664143  832572 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 14:39:35.668802  832572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:39:35.681489  832572 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:35.681562  832572 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:39:35.716328  832572 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 14:39:35.716414  832572 ssh_runner.go:195] Run: which lz4
	I1114 14:39:35.720308  832572 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 14:39:35.724398  832572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 14:39:35.724438  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 14:39:37.393100  832572 crio.go:444] Took 1.672847 seconds to copy over tarball
	I1114 14:39:37.393191  832572 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 14:39:40.427675  832572 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.034439248s)
	I1114 14:39:40.427707  832572 crio.go:451] Took 3.034578 seconds to extract the tarball
	I1114 14:39:40.427720  832572 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 14:39:40.471741  832572 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:39:40.543099  832572 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 14:39:40.543134  832572 cache_images.go:84] Images are preloaded, skipping loading
	I1114 14:39:40.543215  832572 ssh_runner.go:195] Run: crio config
	I1114 14:39:40.603708  832572 cni.go:84] Creating CNI manager for ""
	I1114 14:39:40.603742  832572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:40.603770  832572 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:39:40.603829  832572 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-317784 NodeName:addons-317784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:39:40.603980  832572 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-317784"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 14:39:40.604068  832572 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-317784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:39:40.604141  832572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:39:40.614515  832572 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:39:40.614605  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 14:39:40.624056  832572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1114 14:39:40.639756  832572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:39:40.655596  832572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1114 14:39:40.671574  832572 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I1114 14:39:40.675461  832572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:39:40.687577  832572 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784 for IP: 192.168.39.16
	I1114 14:39:40.687627  832572 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:40.687803  832572 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 14:39:40.831419  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt ...
	I1114 14:39:40.831452  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt: {Name:mk2728f1a821bdf3e5ec632580089d84c6352049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:40.831617  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key ...
	I1114 14:39:40.831628  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key: {Name:mk5a59ca238d6d31d365882787f287599b8d399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:40.831726  832572 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 14:39:41.013453  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt ...
	I1114 14:39:41.013486  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt: {Name:mk257c9eb23f7fbdaa001814b4fedd5597f62c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.013649  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key ...
	I1114 14:39:41.013660  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key: {Name:mk80a0bfef16c16f5e90197d89766bc78fe11e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.013767  832572 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.key
	I1114 14:39:41.013781  832572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt with IP's: []
	I1114 14:39:41.351574  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt ...
	I1114 14:39:41.351619  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: {Name:mk6ffe80523732e40b0dbc0fa24ca3f3c47bb6df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.351812  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.key ...
	I1114 14:39:41.351833  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.key: {Name:mke16ac5b9415fb2c28046a3c54ebef7d6735ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.351929  832572 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3
	I1114 14:39:41.351950  832572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3 with IP's: [192.168.39.16 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 14:39:41.665558  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3 ...
	I1114 14:39:41.665604  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3: {Name:mk80228808592cbc215c7a6c53604575d45b4bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.665789  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3 ...
	I1114 14:39:41.665813  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3: {Name:mkb8c0c446598f67922cc617138e3a1d13df8389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.665916  832572 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt.5918fcb3 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt
	I1114 14:39:41.666011  832572 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key.5918fcb3 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key
	I1114 14:39:41.666072  832572 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key
	I1114 14:39:41.666094  832572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt with IP's: []
	I1114 14:39:41.709495  832572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt ...
	I1114 14:39:41.709533  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt: {Name:mk15b77f79a171ab28b594321bab6aa741d4d2b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.709734  832572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key ...
	I1114 14:39:41.709756  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key: {Name:mkd742e996dcfdb2f8fc372bf4af5205735a5e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:39:41.710008  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 14:39:41.710061  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:39:41.710104  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:39:41.710145  832572 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 14:39:41.710915  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 14:39:41.736487  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 14:39:41.763054  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 14:39:41.787250  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 14:39:41.809772  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:39:41.832415  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 14:39:41.854754  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:39:41.877286  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 14:39:41.899728  832572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:39:41.922287  832572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 14:39:41.937592  832572 ssh_runner.go:195] Run: openssl version
	I1114 14:39:41.943055  832572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:39:41.952905  832572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:39:41.957341  832572 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:39:41.957401  832572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:39:41.962715  832572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
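
Annotation: the `b5213941.0` link created above is OpenSSL's subject-hash lookup name for the minikube CA, which is how the system trust store finds the cert. A minimal Go sketch of that step is shown below; it is illustrative only (minikube's real implementation lives in certs.go and runs these commands over SSH), and the paths are taken from the log lines above.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash mimics the two shell steps in the log above: compute
// `openssl x509 -hash -noout -in <cert>` and create the <hash>.0 symlink
// in the system cert directory so OpenSSL-based clients trust the CA.
// Illustrative sketch only; error handling is simplified.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs semantics: drop any stale link before recreating it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
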
	I1114 14:39:41.972731  832572 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:39:41.976749  832572 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:39:41.976805  832572 kubeadm.go:404] StartCluster: {Name:addons-317784 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-317784 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:39:41.976887  832572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 14:39:41.976934  832572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 14:39:42.013381  832572 cri.go:89] found id: ""
	I1114 14:39:42.013476  832572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 14:39:42.023048  832572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 14:39:42.032155  832572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 14:39:42.041692  832572 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:39:42.041749  832572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 14:39:42.096674  832572 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 14:39:42.096806  832572 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 14:39:42.210879  832572 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 14:39:42.211021  832572 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 14:39:42.211166  832572 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 14:39:42.440989  832572 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 14:39:42.539472  832572 out.go:204]   - Generating certificates and keys ...
	I1114 14:39:42.539602  832572 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 14:39:42.539721  832572 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 14:39:42.617374  832572 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 14:39:42.951029  832572 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 14:39:43.143486  832572 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 14:39:43.571083  832572 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 14:39:43.649832  832572 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 14:39:43.650014  832572 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-317784 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I1114 14:39:43.787785  832572 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 14:39:43.788002  832572 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-317784 localhost] and IPs [192.168.39.16 127.0.0.1 ::1]
	I1114 14:39:44.289217  832572 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 14:39:44.786892  832572 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 14:39:45.056305  832572 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 14:39:45.056634  832572 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 14:39:45.789276  832572 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 14:39:45.903297  832572 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 14:39:46.078471  832572 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 14:39:46.160831  832572 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 14:39:46.161585  832572 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 14:39:46.165968  832572 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 14:39:46.168014  832572 out.go:204]   - Booting up control plane ...
	I1114 14:39:46.168155  832572 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 14:39:46.168286  832572 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 14:39:46.168399  832572 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 14:39:46.184689  832572 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:39:46.185577  832572 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:39:46.185737  832572 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 14:39:46.302491  832572 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 14:39:53.304300  832572 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002823 seconds
	I1114 14:39:53.304493  832572 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 14:39:53.323852  832572 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 14:39:53.856050  832572 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 14:39:53.856416  832572 kubeadm.go:322] [mark-control-plane] Marking the node addons-317784 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 14:39:54.370361  832572 kubeadm.go:322] [bootstrap-token] Using token: tt6miv.sn7glg7rnalzqd3u
	I1114 14:39:54.371827  832572 out.go:204]   - Configuring RBAC rules ...
	I1114 14:39:54.371977  832572 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 14:39:54.378021  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 14:39:54.390001  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 14:39:54.394109  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 14:39:54.397729  832572 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 14:39:54.401766  832572 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 14:39:54.419163  832572 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 14:39:54.654128  832572 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 14:39:54.784521  832572 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 14:39:54.785034  832572 kubeadm.go:322] 
	I1114 14:39:54.785148  832572 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 14:39:54.785173  832572 kubeadm.go:322] 
	I1114 14:39:54.785269  832572 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 14:39:54.785281  832572 kubeadm.go:322] 
	I1114 14:39:54.785304  832572 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 14:39:54.785368  832572 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 14:39:54.785435  832572 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 14:39:54.785446  832572 kubeadm.go:322] 
	I1114 14:39:54.785522  832572 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 14:39:54.785531  832572 kubeadm.go:322] 
	I1114 14:39:54.785589  832572 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 14:39:54.785595  832572 kubeadm.go:322] 
	I1114 14:39:54.785669  832572 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 14:39:54.785768  832572 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 14:39:54.785864  832572 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 14:39:54.785871  832572 kubeadm.go:322] 
	I1114 14:39:54.786015  832572 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 14:39:54.786130  832572 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 14:39:54.786154  832572 kubeadm.go:322] 
	I1114 14:39:54.786250  832572 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tt6miv.sn7glg7rnalzqd3u \
	I1114 14:39:54.786378  832572 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 14:39:54.786418  832572 kubeadm.go:322] 	--control-plane 
	I1114 14:39:54.786433  832572 kubeadm.go:322] 
	I1114 14:39:54.786552  832572 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 14:39:54.786560  832572 kubeadm.go:322] 
	I1114 14:39:54.786665  832572 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tt6miv.sn7glg7rnalzqd3u \
	I1114 14:39:54.786804  832572 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 14:39:54.787006  832572 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:39:54.787039  832572 cni.go:84] Creating CNI manager for ""
	I1114 14:39:54.787050  832572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:54.788842  832572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 14:39:54.790299  832572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 14:39:54.808066  832572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
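
Annotation: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config that cni.go:146 chose for the kvm2 + crio combination. Its exact contents are not reproduced in this log; the sketch below writes a generic bridge + host-local conflist of the usual shape, purely to illustrate what such a file contains (the subnet and plugin options are assumptions, not minikube's literal values).

```go
package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI config with host-local IPAM. This is the typical shape
// of a file like /etc/cni/net.d/1-k8s.conflist; the exact contents minikube
// writes are not shown in the log above, so treat every value as illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Equivalent of the `mkdir -p /etc/cni/net.d` + scp steps in the log,
	// written to the working directory so the sketch can run without root.
	path := "1-k8s.conflist"
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("wrote %s (%d bytes)\n", path, len(bridgeConflist))
}
```
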
	I1114 14:39:54.836187  832572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 14:39:54.836317  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:54.836335  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=addons-317784 minikube.k8s.io/updated_at=2023_11_14T14_39_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:54.894440  832572 ops.go:34] apiserver oom_adj: -16
	I1114 14:39:55.067822  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:55.163043  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:55.752440  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:56.252146  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:56.752263  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:57.252455  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:57.752592  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:58.252570  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:58.751909  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:59.252307  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:39:59.752588  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:00.252065  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:00.752424  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:01.252538  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:01.751755  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:02.252355  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:02.752426  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:03.252165  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:03.752223  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:04.252703  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:04.752842  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:05.252282  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:05.752368  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:06.252115  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:06.752281  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:07.252672  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:07.752574  832572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:40:07.863962  832572 kubeadm.go:1081] duration metric: took 13.027720147s to wait for elevateKubeSystemPrivileges.
	I1114 14:40:07.864030  832572 kubeadm.go:406] StartCluster complete in 25.88723075s
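
Annotation: the block of repeated `kubectl get sa default` calls above (roughly every 0.5s from 14:39:55 to 14:40:07) is a readiness poll: addon installation cannot proceed until the `default` ServiceAccount exists, and the log reports the wait took ~13s. The sketch below reproduces that loop by shelling out to kubectl locally; it is an illustration only, since minikube runs the command over SSH against /var/lib/minikube/kubeconfig on the node.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the repeated calls in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
}

func main() {
	if err := waitForDefaultSA(os.Getenv("KUBECONFIG"), 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("default service account is ready")
}
```
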
	I1114 14:40:07.864057  832572 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:40:07.864198  832572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:40:07.864596  832572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:40:07.864892  832572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 14:40:07.864898  832572 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1114 14:40:07.864989  832572 addons.go:69] Setting helm-tiller=true in profile "addons-317784"
	I1114 14:40:07.865000  832572 addons.go:69] Setting ingress=true in profile "addons-317784"
	I1114 14:40:07.865002  832572 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-317784"
	I1114 14:40:07.865020  832572 addons.go:231] Setting addon helm-tiller=true in "addons-317784"
	I1114 14:40:07.865021  832572 addons.go:69] Setting default-storageclass=true in profile "addons-317784"
	I1114 14:40:07.865047  832572 addons.go:69] Setting metrics-server=true in profile "addons-317784"
	I1114 14:40:07.865062  832572 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-317784"
	I1114 14:40:07.865060  832572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-317784"
	I1114 14:40:07.865070  832572 addons.go:231] Setting addon metrics-server=true in "addons-317784"
	I1114 14:40:07.865091  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865106  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865112  832572 addons.go:69] Setting cloud-spanner=true in profile "addons-317784"
	I1114 14:40:07.865123  832572 addons.go:231] Setting addon cloud-spanner=true in "addons-317784"
	I1114 14:40:07.865131  832572 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:40:07.865147  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865032  832572 addons.go:231] Setting addon ingress=true in "addons-317784"
	I1114 14:40:07.865106  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865221  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865506  832572 addons.go:69] Setting gcp-auth=true in profile "addons-317784"
	I1114 14:40:07.865534  832572 mustload.go:65] Loading cluster: addons-317784
	I1114 14:40:07.865548  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865557  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865578  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865583  832572 addons.go:69] Setting registry=true in profile "addons-317784"
	I1114 14:40:07.864990  832572 addons.go:69] Setting volumesnapshots=true in profile "addons-317784"
	I1114 14:40:07.865594  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865596  832572 addons.go:231] Setting addon registry=true in "addons-317784"
	I1114 14:40:07.865600  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865604  832572 addons.go:231] Setting addon volumesnapshots=true in "addons-317784"
	I1114 14:40:07.865611  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865629  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865037  832572 addons.go:69] Setting ingress-dns=true in profile "addons-317784"
	I1114 14:40:07.865643  832572 addons.go:231] Setting addon ingress-dns=true in "addons-317784"
	I1114 14:40:07.865644  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.865042  832572 addons.go:69] Setting inspektor-gadget=true in profile "addons-317784"
	I1114 14:40:07.865661  832572 addons.go:231] Setting addon inspektor-gadget=true in "addons-317784"
	I1114 14:40:07.865663  832572 addons.go:69] Setting storage-provisioner=true in profile "addons-317784"
	I1114 14:40:07.865671  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865675  832572 addons.go:231] Setting addon storage-provisioner=true in "addons-317784"
	I1114 14:40:07.865714  832572 config.go:182] Loaded profile config "addons-317784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:40:07.865723  832572 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-317784"
	I1114 14:40:07.865735  832572 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-317784"
	I1114 14:40:07.865956  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865963  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865986  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865984  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865990  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866016  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865579  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866041  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.865630  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866050  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866075  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.865718  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866107  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866294  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866394  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866394  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866408  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866411  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866418  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866430  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866044  832572 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-317784"
	I1114 14:40:07.866491  832572 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-317784"
	I1114 14:40:07.866535  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.866759  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866824  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.866899  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.866941  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.885791  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I1114 14:40:07.885807  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I1114 14:40:07.885788  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I1114 14:40:07.885944  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I1114 14:40:07.886311  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886433  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886507  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886575  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.886915  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.886933  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887056  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.887067  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.887078  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887081  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887270  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.887289  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.887505  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.887529  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.887549  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.887735  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.888051  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.888089  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.888105  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.888141  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.888542  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.892855  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.894818  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44257
	I1114 14:40:07.895350  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I1114 14:40:07.901063  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I1114 14:40:07.901299  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.901353  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.901651  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.901776  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.901895  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.901946  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.902241  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.902259  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.902332  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.902348  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.902686  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.902980  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.903198  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.903331  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.903345  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.904108  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.905150  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.905189  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.906493  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.907635  832572 addons.go:231] Setting addon default-storageclass=true in "addons-317784"
	I1114 14:40:07.907685  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.908100  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.908133  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.908710  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.908751  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.921170  832572 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-317784" context rescaled to 1 replicas
	I1114 14:40:07.921225  832572 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
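
Annotation: the kapi.go:248 line above records the coredns deployment being scaled down to a single replica for this single-node cluster before the 6m node wait begins. minikube does this through the Kubernetes API rather than the CLI; the sketch below shows an equivalent, illustrative kubectl invocation wrapped in Go, with the profile name taken from this run.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// scaleCoreDNS performs the same rescale the log line above describes, but as
// a plain `kubectl scale` call. Illustrative only; minikube uses client-go
// internally rather than shelling out.
func scaleCoreDNS(context string, replicas int) error {
	cmd := exec.Command("kubectl", "--context", context,
		"-n", "kube-system", "scale", "deployment", "coredns",
		fmt.Sprintf("--replicas=%d", replicas))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := scaleCoreDNS("addons-317784", 1); err != nil {
		os.Exit(1)
	}
}
```
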
	I1114 14:40:07.923197  832572 out.go:177] * Verifying Kubernetes components...
	I1114 14:40:07.924836  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:40:07.936443  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I1114 14:40:07.937199  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.937939  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.937966  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.938418  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.939046  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.939096  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.939313  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33165
	I1114 14:40:07.939446  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I1114 14:40:07.940021  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.940720  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.940754  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.940818  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I1114 14:40:07.941172  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I1114 14:40:07.941351  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.941423  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.941599  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I1114 14:40:07.941765  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.941831  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.942273  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.942291  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.942352  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I1114 14:40:07.942498  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.943089  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.943252  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1114 14:40:07.943790  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.943827  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.944084  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.944192  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.944337  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.944350  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.944433  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I1114 14:40:07.944516  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.944574  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.944596  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.944903  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.944921  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.946676  832572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:40:07.945337  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.945388  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.945413  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.945486  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.946075  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.946200  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.946882  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I1114 14:40:07.947518  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I1114 14:40:07.948360  832572 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:40:07.948374  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 14:40:07.948399  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.948455  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.948524  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.948582  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.948808  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.949414  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.949448  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.949474  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.949491  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.949812  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.949831  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.950152  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.950329  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.950361  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.950574  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.950629  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.950734  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.950926  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.951336  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.951348  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.951356  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.951393  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.951412  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.951704  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.951795  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.951825  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.952243  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.952350  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.952385  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.952504  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.952526  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.952678  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1114 14:40:07.952870  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.952903  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.953060  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.953149  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.953228  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.953670  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.954214  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.954228  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.954507  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.954635  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.954695  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.956704  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1114 14:40:07.955311  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.956394  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.958153  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1114 14:40:07.958167  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1114 14:40:07.958186  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.960527  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1114 14:40:07.959410  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.963348  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1114 14:40:07.962360  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.962373  832572 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1114 14:40:07.963097  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.966118  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1114 14:40:07.967008  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I1114 14:40:07.968953  832572 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1114 14:40:07.968982  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1114 14:40:07.969005  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.964782  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.969073  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.964983  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.964728  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1114 14:40:07.969641  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I1114 14:40:07.967467  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1114 14:40:07.968180  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.967095  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1114 14:40:07.969944  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.970921  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 14:40:07.970482  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I1114 14:40:07.971230  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.971568  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.971857  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.971874  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.972854  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 14:40:07.972878  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.973481  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.975428  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1114 14:40:07.973523  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.974296  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.974367  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.974940  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.974992  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.975050  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.977335  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.977416  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.978213  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.978555  832572 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 14:40:07.978578  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.978649  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.978812  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.979637  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1114 14:40:07.979780  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.979803  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.979845  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1114 14:40:07.980005  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.980037  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.980322  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.981211  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I1114 14:40:07.982183  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1114 14:40:07.983652  832572 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1114 14:40:07.982226  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.981951  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I1114 14:40:07.982841  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.985124  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1114 14:40:07.985145  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1114 14:40:07.985164  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.982872  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.982972  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.983272  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.984268  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.984292  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.984311  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.984345  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36135
	I1114 14:40:07.985271  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.987233  832572 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1114 14:40:07.985924  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.986482  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.986537  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:07.987670  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.987685  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.988346  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.988810  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.988828  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.988855  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.988880  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.988969  832572 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 14:40:07.988986  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1114 14:40:07.989005  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.989621  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.991206  832572 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1114 14:40:07.991235  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.992814  832572 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1114 14:40:07.992828  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1114 14:40:07.990091  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:07.992831  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.990120  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.992846  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.992853  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:07.990536  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.992877  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.990546  832572 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-317784"
	I1114 14:40:07.992901  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.992929  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:07.990927  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.989686  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.993240  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:07.993284  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.993317  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.993354  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:07.993375  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:07.993573  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.993624  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:07.993664  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.994242  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:07.994749  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.996527  832572 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1114 14:40:07.995445  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.995627  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:07.995756  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.997081  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:07.997226  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.997857  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:07.997880  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.997974  832572 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 14:40:07.997984  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1114 14:40:07.997999  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:07.998046  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:07.999876  832572 out.go:177]   - Using image docker.io/registry:2.8.3
	I1114 14:40:07.998861  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:07.999091  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:07.999611  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.001195  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.002802  832572 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1114 14:40:08.001522  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.001534  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.001768  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.001899  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.004468  832572 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1114 14:40:08.004553  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.004669  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.006221  832572 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1114 14:40:08.007741  832572 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1114 14:40:08.007759  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1114 14:40:08.007779  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.006331  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I1114 14:40:08.006399  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.006416  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1114 14:40:08.007932  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.006608  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.006634  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.006937  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I1114 14:40:08.008463  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.008776  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.008885  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.009461  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.009624  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.009653  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.010028  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.010048  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.010114  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.010373  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:08.010810  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.011016  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:08.012657  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.013189  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.013277  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.013310  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.013365  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.013530  832572 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 14:40:08.015115  832572 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1114 14:40:08.013543  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 14:40:08.013574  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.014050  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.014745  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.016582  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.016638  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.016668  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.016686  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 14:40:08.016695  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 14:40:08.016708  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.016836  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.016904  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.017119  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.017478  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.017646  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.019303  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44135
	I1114 14:40:08.019747  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.020221  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.020241  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.020807  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.020849  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.021180  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021383  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:08.021422  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:08.021504  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021649  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.021677  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021704  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.021716  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.021915  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.021956  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.022101  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.022131  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.022239  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.022292  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.022378  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.022394  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:08.036758  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I1114 14:40:08.037201  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:08.037696  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:08.037717  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:08.038075  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:08.038286  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:08.040033  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:08.042121  832572 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1114 14:40:08.043619  832572 out.go:177]   - Using image docker.io/busybox:stable
	I1114 14:40:08.045193  832572 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 14:40:08.045211  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1114 14:40:08.045228  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:08.048641  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.049281  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:08.049407  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:08.049408  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:08.049691  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:08.049904  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:08.050112  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	W1114 14:40:08.051312  832572 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54622->192.168.39.16:22: read: connection reset by peer
	I1114 14:40:08.051343  832572 retry.go:31] will retry after 362.443206ms: ssh: handshake failed: read tcp 192.168.39.1:54622->192.168.39.16:22: read: connection reset by peer
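The two lines above show the SSH dial failing with a reset connection and minikube's retry helper scheduling another attempt after a short delay. Below is a minimal, hypothetical Go sketch of that retry-with-delay pattern; it is not minikube's actual retry package, and the attempt count, delay, and function names are assumptions made only for illustration.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithDelay runs op until it succeeds or the attempts are exhausted,
    // sleeping a fixed delay between tries, mirroring the "will retry after"
    // log lines above. (Illustrative only; not minikube's implementation.)
    func retryWithDelay(attempts int, delay time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryWithDelay(3, 350*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                // Simulate the transient failure seen in the log.
                return errors.New("ssh: handshake failed: connection reset by peer")
            }
            return nil // second attempt succeeds
        })
        fmt.Println("result:", err)
    }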
	I1114 14:40:08.207409  832572 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1114 14:40:08.207432  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1114 14:40:08.219970  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:40:08.236680  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1114 14:40:08.250193  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1114 14:40:08.250228  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1114 14:40:08.254790  832572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 14:40:08.255454  832572 node_ready.go:35] waiting up to 6m0s for node "addons-317784" to be "Ready" ...
	I1114 14:40:08.309093  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 14:40:08.323090  832572 node_ready.go:49] node "addons-317784" has status "Ready":"True"
	I1114 14:40:08.323130  832572 node_ready.go:38] duration metric: took 67.638062ms waiting for node "addons-317784" to be "Ready" ...
	I1114 14:40:08.323145  832572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:40:08.336832  832572 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1114 14:40:08.336864  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1114 14:40:08.337120  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 14:40:08.344269  832572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:08.356781  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 14:40:08.368608  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 14:40:08.381574  832572 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1114 14:40:08.381602  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1114 14:40:08.386287  832572 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1114 14:40:08.386315  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1114 14:40:08.400478  832572 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1114 14:40:08.400499  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1114 14:40:08.449578  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 14:40:08.449610  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1114 14:40:08.450787  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1114 14:40:08.450811  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1114 14:40:08.460580  832572 pod_ready.go:92] pod "etcd-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:08.460606  832572 pod_ready.go:81] duration metric: took 116.311095ms waiting for pod "etcd-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:08.460622  832572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:08.514149  832572 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1114 14:40:08.514194  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1114 14:40:08.578865  832572 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1114 14:40:08.578892  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1114 14:40:08.592836  832572 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1114 14:40:08.592865  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1114 14:40:08.611006  832572 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1114 14:40:08.611039  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1114 14:40:08.636195  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 14:40:08.636226  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 14:40:08.640801  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1114 14:40:08.640830  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1114 14:40:08.712210  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1114 14:40:08.770436  832572 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1114 14:40:08.770468  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1114 14:40:08.855705  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1114 14:40:08.870150  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1114 14:40:08.870197  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1114 14:40:08.884490  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1114 14:40:08.884522  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1114 14:40:08.888761  832572 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 14:40:08.888787  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 14:40:08.900243  832572 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1114 14:40:08.900265  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1114 14:40:08.910542  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 14:40:08.972222  832572 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1114 14:40:08.972258  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1114 14:40:09.010923  832572 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 14:40:09.010954  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1114 14:40:09.040630  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 14:40:09.057362  832572 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1114 14:40:09.057392  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1114 14:40:09.073246  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1114 14:40:09.073270  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1114 14:40:09.123550  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 14:40:09.139397  832572 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1114 14:40:09.139447  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1114 14:40:09.145153  832572 pod_ready.go:92] pod "kube-apiserver-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:09.145181  832572 pod_ready.go:81] duration metric: took 684.546143ms waiting for pod "kube-apiserver-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.145197  832572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.166443  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1114 14:40:09.166478  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1114 14:40:09.236378  832572 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 14:40:09.236410  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1114 14:40:09.270330  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1114 14:40:09.270362  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1114 14:40:09.304405  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 14:40:09.354255  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1114 14:40:09.354285  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1114 14:40:09.403173  832572 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 14:40:09.403210  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1114 14:40:09.439331  832572 pod_ready.go:92] pod "kube-controller-manager-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:09.439369  832572 pod_ready.go:81] duration metric: took 294.162071ms waiting for pod "kube-controller-manager-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.439384  832572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.451154  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 14:40:09.611012  832572 pod_ready.go:92] pod "kube-scheduler-addons-317784" in "kube-system" namespace has status "Ready":"True"
	I1114 14:40:09.611038  832572 pod_ready.go:81] duration metric: took 171.64584ms waiting for pod "kube-scheduler-addons-317784" in "kube-system" namespace to be "Ready" ...
	I1114 14:40:09.611047  832572 pod_ready.go:38] duration metric: took 1.287888103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:40:09.611064  832572 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:40:09.611141  832572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:40:15.450188  832572 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1114 14:40:15.450247  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:15.453819  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.454375  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:15.454411  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.454550  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:15.454778  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:15.455013  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:15.455201  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:15.682704  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.445983471s)
	I1114 14:40:15.682758  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.462751331s)
	I1114 14:40:15.682802  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.682827  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.682851  832572 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.428024683s)
	I1114 14:40:15.682886  832572 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 14:40:15.682764  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.682932  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.682934  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.373806413s)
	I1114 14:40:15.682991  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683008  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.683287  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.683303  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.683313  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683321  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.683746  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.683766  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.683765  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.683778  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683775  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.683782  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.683787  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.683796  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:15.683803  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:15.686148  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.686171  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.686190  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.686194  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.686203  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.686220  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.686226  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:15.686234  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:15.686174  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:15.855516  832572 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1114 14:40:15.906684  832572 addons.go:231] Setting addon gcp-auth=true in "addons-317784"
	I1114 14:40:15.906766  832572 host.go:66] Checking if "addons-317784" exists ...
	I1114 14:40:15.907266  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:15.907316  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:15.923755  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I1114 14:40:15.924285  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:15.924799  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:15.924827  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:15.925263  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:15.925735  832572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:40:15.925768  832572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:40:15.940272  832572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35045
	I1114 14:40:15.940727  832572 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:40:15.941248  832572 main.go:141] libmachine: Using API Version  1
	I1114 14:40:15.941263  832572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:40:15.941634  832572 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:40:15.941800  832572 main.go:141] libmachine: (addons-317784) Calling .GetState
	I1114 14:40:15.943387  832572 main.go:141] libmachine: (addons-317784) Calling .DriverName
	I1114 14:40:15.943647  832572 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1114 14:40:15.943672  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHHostname
	I1114 14:40:15.946785  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.947258  832572 main.go:141] libmachine: (addons-317784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:c8:7d", ip: ""} in network mk-addons-317784: {Iface:virbr1 ExpiryTime:2023-11-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:0f:c8:7d Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:addons-317784 Clientid:01:52:54:00:0f:c8:7d}
	I1114 14:40:15.947280  832572 main.go:141] libmachine: (addons-317784) DBG | domain addons-317784 has defined IP address 192.168.39.16 and MAC address 52:54:00:0f:c8:7d in network mk-addons-317784
	I1114 14:40:15.947459  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHPort
	I1114 14:40:15.947695  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHKeyPath
	I1114 14:40:15.947862  832572 main.go:141] libmachine: (addons-317784) Calling .GetSSHUsername
	I1114 14:40:15.948046  832572 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/addons-317784/id_rsa Username:docker}
	I1114 14:40:17.113431  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.776257149s)
	I1114 14:40:17.113473  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.744838651s)
	I1114 14:40:17.113511  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113522  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113530  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113540  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113431  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.756602225s)
	I1114 14:40:17.113593  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113645  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113667  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.401419823s)
	I1114 14:40:17.113697  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.113716  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.113769  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.258016647s)
	I1114 14:40:17.113804  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.20323607s)
	I1114 14:40:17.114069  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114085  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114104  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.114088  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.114171  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.073512184s)
	I1114 14:40:17.114202  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114213  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.114296  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.990686239s)
	W1114 14:40:17.114341  832572 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1114 14:40:17.114366  832572 retry.go:31] will retry after 311.125362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1114 14:40:17.114447  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.810005785s)
	I1114 14:40:17.114466  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.114476  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115048  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115070  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115080  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115091  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115102  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115107  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115115  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115124  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.115126  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115129  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115138  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115147  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115158  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115167  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115176  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115185  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115187  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115195  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115206  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115218  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115234  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115243  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115252  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115264  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115272  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115281  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115288  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.115301  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.115310  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115319  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.115327  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.116491  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.116508  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.116520  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.116529  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.116697  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.116725  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.116733  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.116990  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117022  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.117032  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.117091  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117124  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.117132  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.117141  832572 addons.go:467] Verifying addon metrics-server=true in "addons-317784"
	I1114 14:40:17.117489  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117514  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.117542  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.117551  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.115177  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.117791  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.117856  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.118013  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.118024  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.118082  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.118092  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.118100  832572 addons.go:467] Verifying addon registry=true in "addons-317784"
	I1114 14:40:17.120042  832572 out.go:177] * Verifying registry addon...
	I1114 14:40:17.120046  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.120062  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.121681  832572 addons.go:467] Verifying addon ingress=true in "addons-317784"
	I1114 14:40:17.118623  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.123235  832572 out.go:177] * Verifying ingress addon...
	I1114 14:40:17.121763  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.120176  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.118595  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.122456  832572 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1114 14:40:17.126205  832572 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1114 14:40:17.154370  832572 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1114 14:40:17.154399  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:17.154729  832572 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1114 14:40:17.154756  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:17.171341  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.171364  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.171662  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.171679  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	W1114 14:40:17.171773  832572 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1114 14:40:17.182753  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.182777  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.183058  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.183078  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.192454  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:17.192548  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:17.426230  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 14:40:17.716628  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.265400607s)
	I1114 14:40:17.716652  832572 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.105483581s)
	I1114 14:40:17.716686  832572 api_server.go:72] duration metric: took 9.795386004s to wait for apiserver process to appear ...
	I1114 14:40:17.716694  832572 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:40:17.716698  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.716714  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.716716  832572 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I1114 14:40:17.716802  832572 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.773131259s)
	I1114 14:40:17.718710  832572 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 14:40:17.717139  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.717168  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.719959  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.721145  832572 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1114 14:40:17.719986  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:17.722422  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:17.722510  832572 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1114 14:40:17.722534  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1114 14:40:17.722692  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:17.722713  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:17.722733  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:17.722755  832572 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-317784"
	I1114 14:40:17.724210  832572 out.go:177] * Verifying csi-hostpath-driver addon...
	I1114 14:40:17.726496  832572 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1114 14:40:17.799332  832572 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I1114 14:40:17.801243  832572 api_server.go:141] control plane version: v1.28.3
	I1114 14:40:17.801278  832572 api_server.go:131] duration metric: took 84.576298ms to wait for apiserver health ...
	I1114 14:40:17.801291  832572 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:40:17.869942  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:17.892833  832572 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1114 14:40:17.892866  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1114 14:40:17.928603  832572 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 14:40:17.928636  832572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1114 14:40:17.956617  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:17.975923  832572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 14:40:18.011537  832572 system_pods.go:59] 18 kube-system pods found
	I1114 14:40:18.011588  832572 system_pods.go:61] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.011598  832572 system_pods.go:61] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending
	I1114 14:40:18.011607  832572 system_pods.go:61] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending
	I1114 14:40:18.011613  832572 system_pods.go:61] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending
	I1114 14:40:18.011621  832572 system_pods.go:61] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.011628  832572 system_pods.go:61] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.011636  832572 system_pods.go:61] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.011647  832572 system_pods.go:61] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.011664  832572 system_pods.go:61] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.011679  832572 system_pods.go:61] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.011695  832572 system_pods.go:61] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.011709  832572 system_pods.go:61] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.011721  832572 system_pods.go:61] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.011734  832572 system_pods.go:61] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.011748  832572 system_pods.go:61] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.011763  832572 system_pods.go:61] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.011776  832572 system_pods.go:61] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.011789  832572 system_pods.go:61] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.011803  832572 system_pods.go:74] duration metric: took 210.503856ms to wait for pod list to return data ...
	I1114 14:40:18.011819  832572 default_sa.go:34] waiting for default service account to be created ...
	I1114 14:40:18.043289  832572 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 14:40:18.043314  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:18.058440  832572 default_sa.go:45] found service account: "default"
	I1114 14:40:18.058468  832572 default_sa.go:55] duration metric: took 46.637967ms for default service account to be created ...
	I1114 14:40:18.058478  832572 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 14:40:18.144281  832572 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 14:40:18.144320  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:18.181405  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:18.181437  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.181446  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:18.181457  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending
	I1114 14:40:18.181463  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:18.181470  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.181478  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.181482  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.181488  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.181498  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.181506  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.181542  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.181562  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.181575  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.181585  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.181597  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.181615  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.181631  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.181644  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.181667  832572 retry.go:31] will retry after 245.404881ms: missing components: kube-proxy
	I1114 14:40:18.208426  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:18.208568  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:18.452278  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:18.452315  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.452323  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:18.452331  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:18.452344  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:18.452352  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.452357  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.452361  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.452369  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.452374  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.452378  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.452385  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.452393  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.452402  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.452408  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.452416  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.452423  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.452431  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.452439  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.452456  832572 retry.go:31] will retry after 244.568454ms: missing components: kube-proxy
	I1114 14:40:18.720508  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:18.734144  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:18.764367  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:18.811250  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:18.811299  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:18.811314  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:18.811328  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:18.811340  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:18.811360  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:18.811369  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:18.811381  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:18.811397  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:18.811411  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:18.811425  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:18.811440  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:18.811455  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:18.811465  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:18.811479  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:18.811493  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.811510  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:18.811524  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:18.811537  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:18.811564  832572 retry.go:31] will retry after 461.869894ms: missing components: kube-proxy
	I1114 14:40:19.177523  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:19.239442  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:19.261472  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:19.286302  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:19.286338  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:19.286348  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:19.286356  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:19.286363  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:19.286368  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:19.286373  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:19.286378  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:19.286385  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:19.286392  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:19.286399  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:19.286405  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:19.286412  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:19.286420  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:19.286428  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:19.286435  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.286444  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.286449  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:19.286455  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:19.286471  832572 retry.go:31] will retry after 592.745152ms: missing components: kube-proxy
	I1114 14:40:19.650621  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:19.696916  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:19.700046  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:19.899301  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:19.899339  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:19.899348  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:19.899357  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:19.899363  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:19.899368  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:19.899380  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:19.899386  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:19.899394  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:19.899402  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:19.899414  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:19.899423  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:19.899451  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:19.899458  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:19.899467  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:19.899475  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.899485  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:19.899492  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:19.899502  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:19.899525  832572 retry.go:31] will retry after 743.897155ms: missing components: kube-proxy
	I1114 14:40:20.169340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:20.249633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:20.251330  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:20.479080  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.05279354s)
	I1114 14:40:20.479134  832572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.503177148s)
	I1114 14:40:20.479147  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.479164  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.479176  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.479193  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.479532  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.479550  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.479566  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.479574  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.479774  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:20.479802  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:20.479939  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.479950  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.480026  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.480041  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.480054  832572 main.go:141] libmachine: Making call to close driver server
	I1114 14:40:20.480063  832572 main.go:141] libmachine: (addons-317784) Calling .Close
	I1114 14:40:20.480839  832572 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:40:20.480857  832572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:40:20.480839  832572 main.go:141] libmachine: (addons-317784) DBG | Closing plugin on server side
	I1114 14:40:20.482919  832572 addons.go:467] Verifying addon gcp-auth=true in "addons-317784"
	I1114 14:40:20.484729  832572 out.go:177] * Verifying gcp-auth addon...
	I1114 14:40:20.487111  832572 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1114 14:40:20.491132  832572 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1114 14:40:20.491147  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:20.494794  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:20.657446  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:20.657481  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:20.657489  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:20.657498  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:20.657506  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:20.657512  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:20.657516  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:20.657521  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:20.657528  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:20.657534  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:20.657543  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:20.657554  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:20.657564  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:20.657572  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:20.657581  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:20.657588  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:20.657598  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:20.657607  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:20.657616  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:20.657631  832572 retry.go:31] will retry after 593.375754ms: missing components: kube-proxy
	I1114 14:40:20.661445  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:20.703175  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:20.705934  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:20.999265  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:21.153102  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:21.201920  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:21.203388  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:21.263889  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:21.263924  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:21.263934  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:21.263945  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:21.263954  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:21.263959  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:21.263964  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:21.263968  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:21.263974  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:21.263979  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:21.263984  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:21.263992  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:21.263999  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:21.264008  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:21.264014  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:21.264021  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:21.264028  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:21.264033  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:21.264043  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:21.264064  832572 retry.go:31] will retry after 1.176167498s: missing components: kube-proxy
	I1114 14:40:21.499055  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:21.657025  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:21.698437  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:21.702862  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:22.003216  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:22.151843  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:22.203675  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:22.203833  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:22.450065  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:22.450101  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:22.450110  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:22.450119  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:22.450129  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:22.450134  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:22.450139  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:22.450145  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:22.450151  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:22.450158  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:22.450163  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:22.450169  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:22.450178  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:22.450184  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:22.450193  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:22.450200  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:22.450210  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:22.450217  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:22.450226  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:22.450258  832572 retry.go:31] will retry after 1.018281819s: missing components: kube-proxy
	I1114 14:40:22.500492  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:22.650620  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:22.697588  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:22.699250  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:23.004254  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:23.150899  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:23.199121  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:23.199700  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:23.478567  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:23.478608  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:23.478620  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:23.478635  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:23.478642  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:23.478647  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:23.478652  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:23.478656  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:23.478668  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:23.478677  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:40:23.478682  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:23.478688  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:23.478694  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:23.478700  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:23.478707  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:23.478716  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:23.478725  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:23.478732  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:23.478737  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:23.478754  832572 retry.go:31] will retry after 1.491059492s: missing components: kube-proxy
	I1114 14:40:23.499123  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:23.650221  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:23.698660  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:23.698706  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:24.002974  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:24.153501  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:24.198679  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:24.199307  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:24.501020  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:24.652363  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:24.707731  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:24.717333  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:24.985390  832572 system_pods.go:86] 18 kube-system pods found
	I1114 14:40:24.985426  832572 system_pods.go:89] "coredns-5dd5756b68-97twm" [24724bed-9f9e-4ce6-b359-dd22bf06d4a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:40:24.985434  832572 system_pods.go:89] "csi-hostpath-attacher-0" [7ed567ba-0020-4621-bada-2a846f0f47a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 14:40:24.985493  832572 system_pods.go:89] "csi-hostpath-resizer-0" [07e1487b-0aca-47f1-94c6-c98baaf75535] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 14:40:24.985499  832572 system_pods.go:89] "csi-hostpathplugin-z6dqk" [42e7b085-9279-42c4-90f9-6feff2ec6f1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 14:40:24.985505  832572 system_pods.go:89] "etcd-addons-317784" [64885225-f2db-4177-a3aa-463cfa2e439e] Running
	I1114 14:40:24.985509  832572 system_pods.go:89] "kube-apiserver-addons-317784" [797afc54-feb2-4494-bce3-fa826586e734] Running
	I1114 14:40:24.985514  832572 system_pods.go:89] "kube-controller-manager-addons-317784" [479c6a17-83a9-4301-a305-bc87882e2404] Running
	I1114 14:40:24.985523  832572 system_pods.go:89] "kube-ingress-dns-minikube" [db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 14:40:24.985529  832572 system_pods.go:89] "kube-proxy-5jq48" [b4bff1d5-3968-493a-b332-d360861a5698] Running
	I1114 14:40:24.985534  832572 system_pods.go:89] "kube-scheduler-addons-317784" [11aeeb47-0679-4136-9b42-4a3a0cac272f] Running
	I1114 14:40:24.985540  832572 system_pods.go:89] "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:40:24.985547  832572 system_pods.go:89] "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 14:40:24.985557  832572 system_pods.go:89] "registry-frqvq" [4e840532-ea34-4155-9e28-d372f730759d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1114 14:40:24.985565  832572 system_pods.go:89] "registry-proxy-kh6p9" [a19bf641-561e-4422-b35c-1732be0e252d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 14:40:24.985577  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-7t6pq" [fbb464b0-5361-435a-888d-ae86a377888d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:24.985583  832572 system_pods.go:89] "snapshot-controller-58dbcc7b99-zdcmh" [ea8ad365-92c4-44cf-86e7-a36669bf2673] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 14:40:24.985589  832572 system_pods.go:89] "storage-provisioner" [5780cfad-2795-49b4-bb74-d70d6bd20e4a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:40:24.985594  832572 system_pods.go:89] "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1114 14:40:24.985600  832572 system_pods.go:126] duration metric: took 6.9271181s to wait for k8s-apps to be running ...
	I1114 14:40:24.985608  832572 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:40:24.985657  832572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:40:25.029221  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:25.050212  832572 system_svc.go:56] duration metric: took 64.588954ms WaitForService to wait for kubelet.
	I1114 14:40:25.050245  832572 kubeadm.go:581] duration metric: took 17.128945338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:40:25.050267  832572 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:40:25.057185  832572 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:40:25.057219  832572 node_conditions.go:123] node cpu capacity is 2
	I1114 14:40:25.057231  832572 node_conditions.go:105] duration metric: took 6.960226ms to run NodePressure ...
	I1114 14:40:25.057244  832572 start.go:228] waiting for startup goroutines ...
	I1114 14:40:25.159787  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:25.206036  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:25.207160  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:25.499307  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:25.652057  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:25.702844  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:25.707702  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:26.029985  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:26.161062  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:26.204531  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:26.205375  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:26.513078  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:26.656857  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:26.707604  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:26.707894  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:27.001636  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:27.155381  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:27.202593  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:27.202925  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:27.507633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:27.659898  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:27.698822  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:27.699542  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:27.998896  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:28.152507  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:28.208411  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:28.210696  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:28.502760  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:28.660164  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:28.702749  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:28.704650  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:29.004038  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:29.169789  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:29.209226  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:29.209794  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:29.498710  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:29.650606  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:29.699769  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:29.700035  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:30.002817  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:30.151663  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:30.198769  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:30.199476  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:30.499172  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:30.651907  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:30.700674  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:30.701099  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:31.001241  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:31.151326  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:31.200565  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:31.200923  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:31.500232  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:31.650373  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:31.696942  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:31.701820  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:31.999672  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:32.149624  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:32.196993  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:32.198243  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:32.498728  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:32.656845  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:32.699693  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:32.700664  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:33.003516  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:33.152629  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:33.203063  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:33.203734  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:33.507431  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:33.658547  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:33.701275  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:33.702140  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:33.998744  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:34.150724  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:34.199240  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:34.200932  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:34.502459  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:34.654916  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:34.699239  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:34.701021  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:35.000186  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:35.158158  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:35.198937  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:35.200233  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:35.499522  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:35.652097  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:35.699624  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:35.700043  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:35.998954  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:36.150237  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:36.199401  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:36.200214  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:36.499137  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:36.651645  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:36.699338  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:36.699619  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:36.999613  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:37.150852  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:37.198857  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:37.199900  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:37.499110  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:37.666060  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:37.699187  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:37.700835  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:37.999740  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:38.153046  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:38.199370  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:38.199502  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:38.500502  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:38.650703  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:38.698267  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:38.699273  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:38.998806  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:39.151367  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:39.198304  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:39.198878  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:39.499554  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:39.656261  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:39.700078  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:39.700418  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:40.000821  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:40.151182  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:40.198527  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:40.199293  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:40.498633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:40.650032  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:40.699302  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:40.703633  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:40.999414  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:41.154140  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:41.201140  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:41.201411  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:41.502086  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:41.651069  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:41.699219  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:41.701050  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:41.999424  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:42.152676  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:42.197558  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:42.198221  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:42.498954  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:42.690073  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:42.715702  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:42.719942  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:43.012944  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:43.173249  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:43.209800  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:43.214157  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:43.499569  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:43.656672  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:43.698191  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:43.699740  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:44.010559  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:44.151152  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:44.202112  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:44.202500  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:44.499132  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:44.650774  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:44.697651  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:44.700849  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:45.005585  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:45.150511  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:45.196936  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:45.199331  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:45.499529  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:45.652181  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:45.698352  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:45.698459  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:45.999480  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:46.150788  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:46.198444  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:46.198454  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:46.504080  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:46.667659  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:46.699590  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:46.699987  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:47.000009  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:47.152012  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:47.198214  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:47.198250  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:47.498752  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:47.651262  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:47.699880  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:47.701628  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:48.000298  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:48.151167  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:48.198526  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:48.200112  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:48.499187  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:48.651179  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:48.698615  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:48.699928  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:49.004809  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:49.156038  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:49.199853  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:49.201445  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:49.500837  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:49.650623  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:49.701139  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:49.701279  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:50.018902  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:50.150121  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:50.198970  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:50.200210  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:50.501865  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:50.650285  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:50.702798  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:50.702966  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:50.999331  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:51.151582  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:51.198492  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:51.198726  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:51.498757  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:51.650441  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:51.701774  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:51.706071  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:52.000210  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:52.150758  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:52.198049  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:52.198143  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:52.499122  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:52.651193  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:52.699132  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:52.700469  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:52.999517  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:53.150478  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:53.203283  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:53.203377  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:53.500714  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:53.651920  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:53.698152  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:53.700380  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:54.000013  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:54.150770  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:54.199777  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:54.204106  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:54.499498  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:54.652091  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:54.698522  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:54.698849  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:54.999800  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:55.150766  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:55.198078  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:55.198340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:55.499172  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:55.660034  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:55.698014  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:55.699441  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:56.004499  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:56.150707  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:56.197538  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:56.199703  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:56.514398  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:56.660578  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:56.698136  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:56.698327  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:57.006366  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:57.157611  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:57.200472  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:57.202794  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:57.499109  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:57.652941  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:57.699302  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:57.699831  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:57.999138  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:58.150409  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:58.198251  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:58.198389  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:58.501099  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:58.652147  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:58.701560  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:58.702418  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:58.999226  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:59.160876  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:59.200008  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:59.201524  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:59.513057  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:40:59.652217  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:40:59.698806  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:40:59.699369  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:40:59.999463  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:00.151566  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:00.199918  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:00.200346  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:00.499119  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:00.653386  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:00.699799  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:00.704598  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:00.998567  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:01.151939  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:01.199498  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:01.200116  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:01.499914  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:01.649857  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:01.698019  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:01.699890  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:02.000001  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:02.150378  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:02.196939  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:02.198340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:02.501233  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:02.653500  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:02.698580  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:02.698671  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:02.999810  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:03.151004  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:03.199634  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:03.200790  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:03.499376  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:03.651625  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:03.707584  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:03.708179  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:03.999773  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:04.150919  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:04.197541  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:04.198128  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:04.500465  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:04.650799  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:04.699149  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:04.700539  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:05.001767  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:05.150608  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:05.199196  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:05.201050  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:05.499484  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:05.652415  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:05.698926  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:05.700647  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:06.296223  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:06.296378  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:06.298417  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:06.311896  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:06.499021  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:06.652972  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:06.703872  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:06.705099  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:06.999264  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:07.150340  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:07.200580  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:07.200854  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:07.499589  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:07.650253  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:07.698589  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:07.699438  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:07.999856  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:08.158390  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:08.204650  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:08.221266  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:08.499636  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:08.651960  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:08.698636  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:08.701198  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:08.999789  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:09.153015  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:09.198524  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:09.200076  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:09.499922  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:09.650886  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:09.699358  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:09.699444  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:10.240041  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:10.243108  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:10.244222  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:10.249716  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:10.499525  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:10.650548  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:10.700317  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:10.700809  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:10.998803  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:11.151336  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:11.197521  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:11.198984  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:11.498851  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:11.651031  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:11.699121  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:11.699729  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:11.999551  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:12.164122  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:12.208326  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:12.208785  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:12.499034  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:12.653004  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:12.700380  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:12.701778  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:13.000210  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:13.151865  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:13.198767  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:13.200589  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:13.500217  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:13.663691  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:13.707752  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:13.708060  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:13.998873  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:14.157216  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:14.198193  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:14.198802  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 14:41:14.499241  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:14.651319  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:14.697235  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:14.698412  832572 kapi.go:107] duration metric: took 57.575954852s to wait for kubernetes.io/minikube-addons=registry ...
	I1114 14:41:15.004436  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:15.152143  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:15.197933  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:15.499228  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:15.651192  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:15.698605  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:15.999445  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:16.151227  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:16.197916  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:16.498842  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:16.650208  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:16.698166  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:16.999835  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:17.150987  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:17.202111  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:17.499494  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:17.651668  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:17.699225  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:18.156594  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:18.158649  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:18.196976  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:18.500080  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:18.653718  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:18.697389  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:19.002377  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:19.166150  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:19.201683  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:19.507269  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:19.658448  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:19.701567  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:20.000870  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:20.151129  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:20.199608  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:20.499228  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:20.651391  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:20.697659  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:20.998439  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:21.150837  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:21.197993  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:21.500288  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:21.660593  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:21.697975  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:21.999392  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:22.151746  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:22.197813  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:22.499026  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:22.654322  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:22.700725  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:22.999601  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:23.156990  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:23.197308  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:23.503478  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:23.650455  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:23.697466  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:23.999428  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:24.149779  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:24.199873  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:24.499325  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:24.651695  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:24.697973  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:24.999188  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:25.150393  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 14:41:25.197798  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:25.499029  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:25.651226  832572 kapi.go:107] duration metric: took 1m7.924725548s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1114 14:41:25.700308  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:25.999785  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:26.197715  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:26.499502  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:27.015486  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:27.016124  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:27.198437  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:27.501861  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:27.697106  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:27.999060  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:28.199587  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:28.499099  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:28.698127  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:28.999323  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:29.198354  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:29.499908  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:29.699608  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:29.999440  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:30.197280  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:30.639276  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:30.697832  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:31.001524  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:31.198514  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:31.499557  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:31.699707  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:32.000306  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:32.198524  832572 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 14:41:32.504642  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:32.698797  832572 kapi.go:107] duration metric: took 1m15.572587961s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1114 14:41:32.999706  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:33.500527  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:33.999631  832572 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 14:41:34.500590  832572 kapi.go:107] duration metric: took 1m14.013473792s to wait for kubernetes.io/minikube-addons=gcp-auth ...
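(Editor's note: the kapi.go:96/107 entries above show minikube polling addon pods by label selector until they report Ready, then recording the total wait as a duration metric. A roughly equivalent manual check, shown only as an illustrative sketch and not the command minikube itself runs, would be:

    # wait for the gcp-auth addon pod to become Ready; the gcp-auth namespace is confirmed by the pod listing later in this log
    kubectl --context addons-317784 wait pod -n gcp-auth -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=6m

The registry and csi-hostpath-driver selectors above target pods in kube-system, and app.kubernetes.io/name=ingress-nginx targets the ingress-nginx namespace, per the container listing further down.)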
	I1114 14:41:34.502201  832572 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-317784 cluster.
	I1114 14:41:34.503736  832572 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1114 14:41:34.505337  832572 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1114 14:41:34.506837  832572 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1114 14:41:34.508136  832572 addons.go:502] enable addons completed in 1m26.643238104s: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1114 14:41:34.508187  832572 start.go:233] waiting for cluster config update ...
	I1114 14:41:34.508206  832572 start.go:242] writing updated cluster config ...
	I1114 14:41:34.508516  832572 ssh_runner.go:195] Run: rm -f paused
	I1114 14:41:34.562255  832572 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 14:41:34.563794  832572 out.go:177] * Done! kubectl is now configured to use "addons-317784" cluster and "default" namespace by default
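(Editor's note: the gcp-auth messages above mention opting a pod out of the credential mount via the gcp-auth-skip-secret label. A minimal sketch of creating such a pod, with a hypothetical pod name and image chosen purely for illustration, could look like:

    # pod name and image are illustrative placeholders; the gcp-auth-skip-secret label is what opts the pod out of the credential mount
    kubectl --context addons-317784 run skip-gcp-auth-demo --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

The label needs to be set at creation time, since the mount is injected by the gcp-auth webhook (the gcp-auth-webhook container appears in the listing below) when the pod is admitted, which is why the log above says existing pods must be recreated or the addon re-enabled with --refresh.)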
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 14:39:21 UTC, ends at Tue 2023-11-14 14:44:16 UTC. --
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.146637679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973056146611280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:528627,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=aec6f7ca-5a40-4782-8c6a-1f8d552e7229 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.151319343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=71f5d393-8681-4cf9-88d8-0e3ce477c4c5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.151417578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=71f5d393-8681-4cf9-88d8-0e3ce477c4c5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.152055867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a8c78db288af17f92723c6dd1b2b0e2de8bfa0544bc2c9519928cf5c91fe0e5,PodSandboxId:958d89b08a140690a4302405e64fe0ce6fb44fda4ec7a3f12802124c4b0d6cf9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973047604232465,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-tx9zc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8320aa79-9eb4-4015-b228-a9fea284894e,},Annotations:map[string]string{io.kubernetes.container.hash: 63094ead,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},An
notations:map[string]string{io.kubernetes.container.hash: f93dc9b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16999728
78356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:19
65e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6
e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:
1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0
,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b4
6093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c51
3c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb606036899440
9f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956b
bc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c30278
97ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=71f5d393-8681-4cf9-88d8-0e3ce477c4c5 name=/runtime.v1.RuntimeService/ListConta
iners
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.194622471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=de8f36aa-e8df-46e1-8af0-5d6854fac68f name=/runtime.v1.RuntimeService/Version
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.194679436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=de8f36aa-e8df-46e1-8af0-5d6854fac68f name=/runtime.v1.RuntimeService/Version
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.195914463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=015de82f-7189-4db5-988e-add2f885d6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.197317906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973056197298696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:528627,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=015de82f-7189-4db5-988e-add2f885d6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.198073230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d6b1f4af-772d-41f6-9126-0112bcb380f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.198212527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d6b1f4af-772d-41f6-9126-0112bcb380f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.198612949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a8c78db288af17f92723c6dd1b2b0e2de8bfa0544bc2c9519928cf5c91fe0e5,PodSandboxId:958d89b08a140690a4302405e64fe0ce6fb44fda4ec7a3f12802124c4b0d6cf9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973047604232465,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-tx9zc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8320aa79-9eb4-4015-b228-a9fea284894e,},Annotations:map[string]string{io.kubernetes.container.hash: 63094ead,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},An
notations:map[string]string{io.kubernetes.container.hash: f93dc9b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16999728
78356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:19
65e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6
e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:
1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0
,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b4
6093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c51
3c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb606036899440
9f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956b
bc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c30278
97ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d6b1f4af-772d-41f6-9126-0112bcb380f1 name=/runtime.v1.RuntimeService/ListConta
iners
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.237873565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1767d5fe-270f-49b7-ad24-74833c81c7d7 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.237955107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1767d5fe-270f-49b7-ad24-74833c81c7d7 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.238955769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=263cb9a8-1a1f-466f-947f-209628320821 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.240347754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973056240329942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:528627,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=263cb9a8-1a1f-466f-947f-209628320821 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.240842137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb93a248-84a8-4d87-b982-51d720d7e4be name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.240921457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb93a248-84a8-4d87-b982-51d720d7e4be name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.241379312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a8c78db288af17f92723c6dd1b2b0e2de8bfa0544bc2c9519928cf5c91fe0e5,PodSandboxId:958d89b08a140690a4302405e64fe0ce6fb44fda4ec7a3f12802124c4b0d6cf9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973047604232465,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-tx9zc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8320aa79-9eb4-4015-b228-a9fea284894e,},Annotations:map[string]string{io.kubernetes.container.hash: 63094ead,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},An
notations:map[string]string{io.kubernetes.container.hash: f93dc9b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16999728
78356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:19
65e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6
e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:
1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0
,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b4
6093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c51
3c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb606036899440
9f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956b
bc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c30278
97ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb93a248-84a8-4d87-b982-51d720d7e4be name=/runtime.v1.RuntimeService/ListConta
iners
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.293006340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eb0a602b-93ab-486e-8854-0fdf91137861 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.293063948Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eb0a602b-93ab-486e-8854-0fdf91137861 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.294411683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d6b4b2c1-11d4-43d2-8884-63db2c6e7f83 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.295633679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973056295615321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:528627,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=d6b4b2c1-11d4-43d2-8884-63db2c6e7f83 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.296532910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bace099c-83e2-4a99-8b29-d8011de2111e name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.296609620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bace099c-83e2-4a99-8b29-d8011de2111e name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:44:16 addons-317784 crio[714]: time="2023-11-14 14:44:16.297335320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a8c78db288af17f92723c6dd1b2b0e2de8bfa0544bc2c9519928cf5c91fe0e5,PodSandboxId:958d89b08a140690a4302405e64fe0ce6fb44fda4ec7a3f12802124c4b0d6cf9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973047604232465,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-tx9zc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8320aa79-9eb4-4015-b228-a9fea284894e,},Annotations:map[string]string{io.kubernetes.container.hash: 63094ead,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a240297c919b798f7ed75e51be62f4350a3a0fd1e85d0d646d812c78db09d4,PodSandboxId:5c25d3f760ba9ec455817f0c3155d269a328a9df8ac8ed4bff9de3913d6f6f31,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1699972912435922597,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-lx8bp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f98e26b0-53b8-407a-9f98-712a0310b50a,},An
notations:map[string]string{io.kubernetes.container.hash: f93dc9b6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2806223cfd3a51a702cceab7f77047f86069498aa59c79e9231e882f780430,PodSandboxId:fbc912217deb20d88956a4c0ee7780f80c1493be2fc96b31f4b269ed9b99a0ad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699972907481534968,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 1305e1fa-41d3-4ccb-9590-a5da7f844175,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed09ef1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0,PodSandboxId:0b4739dda66af14458e9b7d8702dad5b58ebc11da3eae89b73d1f3861f18cff3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699972893944780241,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-fr8lj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0897976d-190b-43cb-886b-5711767f4b5c,},Annotations:map[string]string{io.kubernetes.container.hash: 23c8a73e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d63d2948c7de44c783714f4b136ee7d1fc7493dc29a091054a8d06edb9962e,PodSandboxId:1463081c9b3aec76a481fe360b9b14e5112d8f972f5a4dbcc123ae8ed9c6f6f8,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16999728
78356942249,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cxw9h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fc761ba-8d27-4b75-86ac-042563877790,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea11d62,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1427ec2aeb0037721c590469d1e293f6799878744c80ce8ef2cde7f203e4918,PodSandboxId:edd550a1a7f4d5bc13c6653d7eb7ba151b7a0b9937d220f198617bd4581203cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1699972873538233706,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mp8tp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bcda67e-c991-49a0-9a5a-7123473c3d67,},Annotations:map[string]string{io.kubernetes.container.hash: 560bc442,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18faf6d4d568a06cc147761fc102762320f5f7acc6f1e3ed37e5be296e886d28,PodSandboxId:3b11d4daeaa4f4b8430dd6c39f07cba8c0f5553f396d8d9edece87939ee805db,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:19
65e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1699972873239895353,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-kh6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a19bf641-561e-4422-b35c-1732be0e252d,},Annotations:map[string]string{io.kubernetes.container.hash: 975d5bc2,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394f9ede26bc749e3080b73e3a368152a85e31b11fab976e5170e1afe607bfc7,PodSandboxId:8066931a75328a65077b53ed36d39e1e9633d10ccbfca158327d96e410bde4b3,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:386cdae4ba70c368b780a6
e54251a14d300281a3d147a18ef08ae6fb079d150c,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c,State:CONTAINER_RUNNING,CreatedAt:1699972868105284049,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-frqvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e840532-ea34-4155-9e28-d372f730759d,},Annotations:map[string]string{io.kubernetes.container.hash: b18b8f4f,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:
1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699972855737067781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db,PodSandboxId:e1e9062a537fcaa3c4f614b1b75d899872e4d59d94d5f3c073f41cb207a9623f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0
,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699972823393746785,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5780cfad-2795-49b4-bb74-d70d6bd20e4a,},Annotations:map[string]string{io.kubernetes.container.hash: c7d1534d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653,PodSandboxId:45e1b5dfd57fb6a82633547a42908a1ca3b2260ab9800eb09a1cc5f549a01510,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699972822763333368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5jq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bff1d5-3968-493a-b332-d360861a5698,},Annotations:map[string]string{io.kubernetes.container.hash: 49d9aeb8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4,PodSandboxId:b64c334306ed07fbcebfa42abe0acf9bf23f241844ecce0d652f8fefb6c8f08c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b4
6093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699972814159004307,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-97twm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24724bed-9f9e-4ce6-b359-dd22bf06d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: cb0ddfca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f9b0cc72b7becbdf494fe2748caed70a4e53672c51
3c7b0ff2fe2eb2e4fb02,PodSandboxId:f35540a56b98cf09c5906b2080b4af1c8ce4a5e5465fc9a58140a6d7476bf191,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699972788074196332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b55d3601f9ab50d0fccd5e81d0057b,},Annotations:map[string]string{io.kubernetes.container.hash: bdb6ecd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf,PodSandboxId:cadfaa6eb606036899440
9f96fa9fd872f0f084b9b24ea81d3bdeaa027896cf7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699972788124380935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 835de15b6e6cd8d1adf2d3d351772b5f,},Annotations:map[string]string{io.kubernetes.container.hash: d88cb9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887,PodSandboxId:a3cb989518dfa9522097ea174fff2ad7af956b
bc8d87eece8731c6958e4bb24d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699972787960388189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b942e929c440df9df70fd6ab79e131a8,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539,PodSandboxId:f27f11921c2c30278
97ee1fbd58db7f8d3029fb857c4ed25cd7d6a95747fc5d4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699972787637653355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-317784,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 083080137e96a65385e00b26b78226ae,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bace099c-83e2-4a99-8b29-d8011de2111e name=/runtime.v1.RuntimeService/ListConta
iners
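
Editor's note: the repeated /runtime.v1.RuntimeService/ListContainers entries above are the kubelet's periodic CRI polls against crio's socket; an empty filter returns every container, which is why crio logs "No filters were applied, returning full container list". Below is a minimal Go sketch of the same call. The socket path matches the cri-socket annotation shown under "describe nodes" further down, but the client code itself is illustrative only and is not part of this test suite.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // crio's CRI endpoint (see the cri-socket annotation in "describe nodes" below).
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // An empty request (no ContainerFilter) asks for the full list, which is
        // exactly the call the debug lines above show crio answering.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
        }
    }

The "container status" table that follows is a condensed view of the same listing.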
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a8c78db288af       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   958d89b08a140       hello-world-app-5d77478584-tx9zc
	f9a240297c919       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   5c25d3f760ba9       headlamp-777fd4b855-lx8bp
	ab2806223cfd3       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   fbc912217deb2       nginx
	b6a01da282f10       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   0b4739dda66af       gcp-auth-d4c87556c-fr8lj
	87d63d2948c7d       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             2 minutes ago       Exited              patch                     3                   1463081c9b3ae       ingress-nginx-admission-patch-cxw9h
	f1427ec2aeb00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   edd550a1a7f4d       ingress-nginx-admission-create-mp8tp
	18faf6d4d568a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5              3 minutes ago       Running             registry-proxy            0                   3b11d4daeaa4f       registry-proxy-kh6p9
	394f9ede26bc7       docker.io/library/registry@sha256:386cdae4ba70c368b780a6e54251a14d300281a3d147a18ef08ae6fb079d150c                           3 minutes ago       Running             registry                  0                   8066931a75328       registry-frqvq
	226be02a2e442       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   e1e9062a537fc       storage-provisioner
	14755bac67833       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Exited              storage-provisioner       0                   e1e9062a537fc       storage-provisioner
	cea1861be3ae8       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             3 minutes ago       Running             kube-proxy                0                   45e1b5dfd57fb       kube-proxy-5jq48
	09b803467d9c5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   b64c334306ed0       coredns-5dd5756b68-97twm
	ba4a05a0c7a22       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   cadfaa6eb6060       kube-apiserver-addons-317784
	c1f9b0cc72b7b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   f35540a56b98c       etcd-addons-317784
	505ab9c4cf6d2       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   a3cb989518dfa       kube-controller-manager-addons-317784
	dff28b8dc980b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   f27f11921c2c3       kube-scheduler-addons-317784
	
	* 
	* ==> coredns [09b803467d9c556e1ff7f23cd1d1f99239fa50fd9c697a7545f0e65ad3fce2a4] <==
	* [INFO] 10.244.0.9:41346 - 19020 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131778s
	[INFO] 10.244.0.9:58603 - 61307 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071162s
	[INFO] 10.244.0.9:58603 - 63870 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000265287s
	[INFO] 10.244.0.9:50521 - 2467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000439468s
	[INFO] 10.244.0.9:50521 - 17309 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075679s
	[INFO] 10.244.0.9:34498 - 13348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00984641s
	[INFO] 10.244.0.9:34498 - 64042 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000121428s
	[INFO] 10.244.0.9:35561 - 28704 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000580894s
	[INFO] 10.244.0.9:35561 - 40996 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000229485s
	[INFO] 10.244.0.9:60718 - 58845 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104188s
	[INFO] 10.244.0.9:60718 - 43396 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110913s
	[INFO] 10.244.0.9:49511 - 11468 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050358s
	[INFO] 10.244.0.9:49511 - 42446 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039897s
	[INFO] 10.244.0.9:38290 - 28507 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050005s
	[INFO] 10.244.0.9:38290 - 14681 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000202228s
	[INFO] 10.244.0.21:40537 - 4376 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000263958s
	[INFO] 10.244.0.21:42553 - 23185 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000081253s
	[INFO] 10.244.0.21:57194 - 2973 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014384s
	[INFO] 10.244.0.21:48501 - 9857 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089562s
	[INFO] 10.244.0.21:33440 - 31784 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103644s
	[INFO] 10.244.0.21:51334 - 21085 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058908s
	[INFO] 10.244.0.21:43251 - 14905 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000753986s
	[INFO] 10.244.0.21:36823 - 28834 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000367469s
	[INFO] 10.244.0.25:40537 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00023642s
	[INFO] 10.244.0.25:53536 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173751s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-317784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-317784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=addons-317784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T14_39_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-317784
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:39:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-317784
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:42:29 +0000   Tue, 14 Nov 2023 14:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:42:29 +0000   Tue, 14 Nov 2023 14:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:42:29 +0000   Tue, 14 Nov 2023 14:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:42:29 +0000   Tue, 14 Nov 2023 14:39:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    addons-317784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 a83e3ef3393c4c0ebdac4f3d3aadc38f
	  System UUID:                a83e3ef3-393c-4c0e-bdac-4f3d3aadc38f
	  Boot ID:                    244a92c1-0d37-446c-b5f0-87cca554f62d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-tx9zc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-fr8lj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  headlamp                    headlamp-777fd4b855-lx8bp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-5dd5756b68-97twm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m8s
	  kube-system                 etcd-addons-317784                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-addons-317784             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-addons-317784    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-5jq48                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-317784             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 registry-frqvq                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-proxy-kh6p9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m50s  kube-proxy       
	  Normal  Starting                 4m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s  kubelet          Node addons-317784 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s  kubelet          Node addons-317784 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s  kubelet          Node addons-317784 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m21s  kubelet          Node addons-317784 status is now: NodeReady
	  Normal  RegisteredNode           4m9s   node-controller  Node addons-317784 event: Registered Node addons-317784 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.378192] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.375418] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147064] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.040265] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.040617] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.102013] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.136935] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.102659] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.215677] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +11.387383] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +8.257742] systemd-fstab-generator[1244]: Ignoring "noauto" for root device
	[Nov14 14:40] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.014432] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.027775] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.883546] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.618813] kauditd_printk_skb: 14 callbacks suppressed
	[Nov14 14:41] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.098119] kauditd_printk_skb: 14 callbacks suppressed
	[Nov14 14:42] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.527856] kauditd_printk_skb: 11 callbacks suppressed
	[ +40.402933] kauditd_printk_skb: 12 callbacks suppressed
	[Nov14 14:44] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [c1f9b0cc72b7becbdf494fe2748caed70a4e53672c513c7b0ff2fe2eb2e4fb02] <==
	* {"level":"info","ts":"2023-11-14T14:41:27.009211Z","caller":"traceutil/trace.go:171","msg":"trace[848038853] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1090; }","duration":"316.987871ms","start":"2023-11-14T14:41:26.692216Z","end":"2023-11-14T14:41:27.009204Z","steps":["trace[848038853] 'agreement among raft nodes before linearized reading'  (duration: 316.857452ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:27.00923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:41:26.692203Z","time spent":"317.022675ms","remote":"127.0.0.1:37518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13884,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2023-11-14T14:41:27.009394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.736494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T14:41:27.009511Z","caller":"traceutil/trace.go:171","msg":"trace[1894288230] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1090; }","duration":"159.843458ms","start":"2023-11-14T14:41:26.849644Z","end":"2023-11-14T14:41:27.009487Z","steps":["trace[1894288230] 'agreement among raft nodes before linearized reading'  (duration: 159.72045ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:41:30.629956Z","caller":"traceutil/trace.go:171","msg":"trace[1998181659] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"214.596352ms","start":"2023-11-14T14:41:30.415345Z","end":"2023-11-14T14:41:30.629941Z","steps":["trace[1998181659] 'process raft request'  (duration: 214.467637ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:41:30.63225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.265651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-11-14T14:41:30.632312Z","caller":"traceutil/trace.go:171","msg":"trace[1526788742] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1095; }","duration":"159.337168ms","start":"2023-11-14T14:41:30.472964Z","end":"2023-11-14T14:41:30.632301Z","steps":["trace[1526788742] 'agreement among raft nodes before linearized reading'  (duration: 159.20187ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:41:30.632257Z","caller":"traceutil/trace.go:171","msg":"trace[1226340439] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"159.249558ms","start":"2023-11-14T14:41:30.472988Z","end":"2023-11-14T14:41:30.632237Z","steps":["trace[1226340439] 'read index received'  (duration: 159.000396ms)","trace[1226340439] 'applied index is now lower than readState.Index'  (duration: 247.526µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T14:41:30.633961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.390237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10948"}
	{"level":"info","ts":"2023-11-14T14:41:30.634014Z","caller":"traceutil/trace.go:171","msg":"trace[1382020109] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1095; }","duration":"140.44875ms","start":"2023-11-14T14:41:30.493558Z","end":"2023-11-14T14:41:30.634006Z","steps":["trace[1382020109] 'agreement among raft nodes before linearized reading'  (duration: 140.354087ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:42:10.977254Z","caller":"traceutil/trace.go:171","msg":"trace[18833956] linearizableReadLoop","detail":"{readStateIndex:1448; appliedIndex:1447; }","duration":"344.336265ms","start":"2023-11-14T14:42:10.632874Z","end":"2023-11-14T14:42:10.97721Z","steps":["trace[18833956] 'read index received'  (duration: 344.058219ms)","trace[18833956] 'applied index is now lower than readState.Index'  (duration: 277.425µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-14T14:42:10.977581Z","caller":"traceutil/trace.go:171","msg":"trace[1481220825] transaction","detail":"{read_only:false; response_revision:1400; number_of_response:1; }","duration":"388.500995ms","start":"2023-11-14T14:42:10.589059Z","end":"2023-11-14T14:42:10.97756Z","steps":["trace[1481220825] 'process raft request'  (duration: 387.924189ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:42:10.977819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.901729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T14:42:10.977899Z","caller":"traceutil/trace.go:171","msg":"trace[1386450767] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1400; }","duration":"192.989136ms","start":"2023-11-14T14:42:10.784897Z","end":"2023-11-14T14:42:10.977886Z","steps":["trace[1386450767] 'agreement among raft nodes before linearized reading'  (duration: 192.866179ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:42:10.977953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.096063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2023-11-14T14:42:10.978004Z","caller":"traceutil/trace.go:171","msg":"trace[1894953368] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1400; }","duration":"345.146064ms","start":"2023-11-14T14:42:10.632851Z","end":"2023-11-14T14:42:10.977997Z","steps":["trace[1894953368] 'agreement among raft nodes before linearized reading'  (duration: 345.070609ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:42:10.978032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:42:10.632839Z","time spent":"345.187231ms","remote":"127.0.0.1:37594","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":2293,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" "}
	{"level":"warn","ts":"2023-11-14T14:42:10.978262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.655041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-11-14T14:42:10.978337Z","caller":"traceutil/trace.go:171","msg":"trace[532422949] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1400; }","duration":"145.70234ms","start":"2023-11-14T14:42:10.832598Z","end":"2023-11-14T14:42:10.978301Z","steps":["trace[532422949] 'agreement among raft nodes before linearized reading'  (duration: 145.629237ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:42:10.977841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T14:42:10.589046Z","time spent":"388.564833ms","remote":"127.0.0.1:37536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1342 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2023-11-14T14:42:10.977901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.552023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T14:42:10.978558Z","caller":"traceutil/trace.go:171","msg":"trace[1481163529] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1400; }","duration":"129.206996ms","start":"2023-11-14T14:42:10.849344Z","end":"2023-11-14T14:42:10.978551Z","steps":["trace[1481163529] 'agreement among raft nodes before linearized reading'  (duration: 128.532192ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T14:42:13.887797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.545067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" ","response":"range_response_count:1 size:7521"}
	{"level":"info","ts":"2023-11-14T14:42:13.88794Z","caller":"traceutil/trace.go:171","msg":"trace[1280161867] range","detail":"{range_begin:/registry/pods/gadget/; range_end:/registry/pods/gadget0; response_count:1; response_revision:1426; }","duration":"234.729759ms","start":"2023-11-14T14:42:13.653194Z","end":"2023-11-14T14:42:13.887924Z","steps":["trace[1280161867] 'range keys from in-memory index tree'  (duration: 234.313368ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T14:42:19.466636Z","caller":"traceutil/trace.go:171","msg":"trace[2139223082] transaction","detail":"{read_only:false; response_revision:1458; number_of_response:1; }","duration":"147.534361ms","start":"2023-11-14T14:42:19.319086Z","end":"2023-11-14T14:42:19.466621Z","steps":["trace[2139223082] 'process raft request'  (duration: 147.363799ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [b6a01da282f10d976c5937ae14680cb8b71cb208bd46a9bd69ce8c16ec813aa0] <==
	* 2023/11/14 14:41:34 GCP Auth Webhook started!
	2023/11/14 14:41:39 Ready to marshal response ...
	2023/11/14 14:41:39 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:42 Ready to marshal response ...
	2023/11/14 14:41:42 Ready to write response ...
	2023/11/14 14:41:44 Ready to marshal response ...
	2023/11/14 14:41:44 Ready to write response ...
	2023/11/14 14:41:47 Ready to marshal response ...
	2023/11/14 14:41:47 Ready to write response ...
	2023/11/14 14:41:47 Ready to marshal response ...
	2023/11/14 14:41:47 Ready to write response ...
	2023/11/14 14:42:02 Ready to marshal response ...
	2023/11/14 14:42:02 Ready to write response ...
	2023/11/14 14:42:07 Ready to marshal response ...
	2023/11/14 14:42:07 Ready to write response ...
	2023/11/14 14:42:30 Ready to marshal response ...
	2023/11/14 14:42:30 Ready to write response ...
	2023/11/14 14:44:05 Ready to marshal response ...
	2023/11/14 14:44:05 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  14:44:16 up 5 min,  0 users,  load average: 1.89, 2.14, 1.05
	Linux addons-317784 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ba4a05a0c7a22fc44e9a65a0f54c73a71f593ba5e02579e1a2223dab6c584ebf] <==
	* E1114 14:42:18.329481       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1114 14:42:23.080086       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1114 14:42:47.810619       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.810702       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.823306       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.823398       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.874340       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.874453       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.881681       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.881775       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.901886       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.901962       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.928355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.928457       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.961620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.961858       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 14:42:47.983281       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 14:42:47.983392       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1114 14:42:48.883257       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1114 14:42:48.983169       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1114 14:42:49.023377       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1114 14:42:57.642763       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1114 14:44:05.836859       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.52.97"}
	E1114 14:44:08.321341       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1114 14:44:11.154202       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [505ab9c4cf6d2ac42836724ad16177658fc9b94a1d088704077cff36f8f09887] <==
	* W1114 14:43:30.563078       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:43:30.563292       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 14:43:30.675493       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:43:30.675589       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 14:43:30.694339       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:43:30.694427       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 14:43:32.345557       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:43:32.345584       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 14:44:02.671657       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:44:02.671768       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1114 14:44:05.601501       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1114 14:44:05.636615       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-tx9zc"
	I1114 14:44:05.645836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.476741ms"
	I1114 14:44:05.652395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.489584ms"
	I1114 14:44:05.679338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.877985ms"
	I1114 14:44:05.679577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="94.452µs"
	I1114 14:44:08.194570       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1114 14:44:08.199616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="6.069µs"
	I1114 14:44:08.209564       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1114 14:44:08.415670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="22.261501ms"
	I1114 14:44:08.415761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.919µs"
	W1114 14:44:09.361834       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:44:09.361994       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 14:44:16.778281       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 14:44:16.778517       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [cea1861be3ae856b5e08176524c4fec0e9ab11c672cb6dc76c599084e0276653] <==
	* I1114 14:40:24.627085       1 server_others.go:69] "Using iptables proxy"
	I1114 14:40:25.293037       1 node.go:141] Successfully retrieved node IP: 192.168.39.16
	I1114 14:40:25.734833       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 14:40:25.734935       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 14:40:25.836357       1 server_others.go:152] "Using iptables Proxier"
	I1114 14:40:25.836440       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 14:40:25.836695       1 server.go:846] "Version info" version="v1.28.3"
	I1114 14:40:25.836706       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 14:40:25.851680       1 config.go:188] "Starting service config controller"
	I1114 14:40:25.851838       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 14:40:25.851880       1 config.go:97] "Starting endpoint slice config controller"
	I1114 14:40:25.851884       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 14:40:25.856832       1 config.go:315] "Starting node config controller"
	I1114 14:40:25.856843       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 14:40:25.952009       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 14:40:25.952252       1 shared_informer.go:318] Caches are synced for service config
	I1114 14:40:25.960853       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [dff28b8dc980b3aa1c8c5c2c90d718407cb50f03747da6af20946acb7cd0e539] <==
	* W1114 14:39:51.529533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 14:39:51.530067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 14:39:51.530171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 14:39:51.530227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 14:39:51.530271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 14:39:51.531565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:51.531572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 14:39:51.531683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 14:39:51.531839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 14:39:51.531925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:39:51.531993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:51.532063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1114 14:39:52.342331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:52.342358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 14:39:52.481787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:39:52.481844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 14:39:52.569316       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 14:39:52.569366       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 14:39:52.601348       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:39:52.601406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 14:39:52.634998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 14:39:52.635048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 14:39:52.679528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 14:39:52.679582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1114 14:39:55.503840       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 14:39:21 UTC, ends at Tue 2023-11-14 14:44:16 UTC. --
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.650760    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="42e7b085-9279-42c4-90f9-6feff2ec6f1e" containerName="node-driver-registrar"
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.650766    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="07e1487b-0aca-47f1-94c6-c98baaf75535" containerName="csi-resizer"
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.650772    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="ea8ad365-92c4-44cf-86e7-a36669bf2673" containerName="volume-snapshot-controller"
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.650777    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="42e7b085-9279-42c4-90f9-6feff2ec6f1e" containerName="liveness-probe"
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.650784    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="53c9b4c7-a3e2-49a4-af10-efc96caa257e" containerName="task-pv-container"
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.758077    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8320aa79-9eb4-4015-b228-a9fea284894e-gcp-creds\") pod \"hello-world-app-5d77478584-tx9zc\" (UID: \"8320aa79-9eb4-4015-b228-a9fea284894e\") " pod="default/hello-world-app-5d77478584-tx9zc"
	Nov 14 14:44:05 addons-317784 kubelet[1251]: I1114 14:44:05.758358    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2zp\" (UniqueName: \"kubernetes.io/projected/8320aa79-9eb4-4015-b228-a9fea284894e-kube-api-access-5c2zp\") pod \"hello-world-app-5d77478584-tx9zc\" (UID: \"8320aa79-9eb4-4015-b228-a9fea284894e\") " pod="default/hello-world-app-5d77478584-tx9zc"
	Nov 14 14:44:06 addons-317784 kubelet[1251]: I1114 14:44:06.969845    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxhw\" (UniqueName: \"kubernetes.io/projected/db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b-kube-api-access-nsxhw\") pod \"db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b\" (UID: \"db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b\") "
	Nov 14 14:44:06 addons-317784 kubelet[1251]: I1114 14:44:06.977594    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b-kube-api-access-nsxhw" (OuterVolumeSpecName: "kube-api-access-nsxhw") pod "db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b" (UID: "db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b"). InnerVolumeSpecName "kube-api-access-nsxhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 14:44:07 addons-317784 kubelet[1251]: I1114 14:44:07.070384    1251 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nsxhw\" (UniqueName: \"kubernetes.io/projected/db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b-kube-api-access-nsxhw\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:44:07 addons-317784 kubelet[1251]: I1114 14:44:07.349969    1251 scope.go:117] "RemoveContainer" containerID="d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3"
	Nov 14 14:44:07 addons-317784 kubelet[1251]: I1114 14:44:07.537809    1251 scope.go:117] "RemoveContainer" containerID="d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3"
	Nov 14 14:44:07 addons-317784 kubelet[1251]: E1114 14:44:07.538510    1251 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3\": container with ID starting with d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3 not found: ID does not exist" containerID="d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3"
	Nov 14 14:44:07 addons-317784 kubelet[1251]: I1114 14:44:07.538588    1251 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3"} err="failed to get container status \"d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3\": rpc error: code = NotFound desc = could not find container \"d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3\": container with ID starting with d6db3ccc8731eb5ab0b6a39ae9964192a2198a4f86d59628c50cefd30f587fe3 not found: ID does not exist"
	Nov 14 14:44:08 addons-317784 kubelet[1251]: I1114 14:44:08.781413    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1fc761ba-8d27-4b75-86ac-042563877790" path="/var/lib/kubelet/pods/1fc761ba-8d27-4b75-86ac-042563877790/volumes"
	Nov 14 14:44:08 addons-317784 kubelet[1251]: I1114 14:44:08.781981    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7bcda67e-c991-49a0-9a5a-7123473c3d67" path="/var/lib/kubelet/pods/7bcda67e-c991-49a0-9a5a-7123473c3d67/volumes"
	Nov 14 14:44:08 addons-317784 kubelet[1251]: I1114 14:44:08.782564    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b" path="/var/lib/kubelet/pods/db21ecb2-dc98-4c4c-8c4a-c1d6fe89ae8b/volumes"
	Nov 14 14:44:11 addons-317784 kubelet[1251]: I1114 14:44:11.608697    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f4fe5be-a92a-4711-9615-4091dbade91d-webhook-cert\") pod \"3f4fe5be-a92a-4711-9615-4091dbade91d\" (UID: \"3f4fe5be-a92a-4711-9615-4091dbade91d\") "
	Nov 14 14:44:11 addons-317784 kubelet[1251]: I1114 14:44:11.608779    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnrkm\" (UniqueName: \"kubernetes.io/projected/3f4fe5be-a92a-4711-9615-4091dbade91d-kube-api-access-cnrkm\") pod \"3f4fe5be-a92a-4711-9615-4091dbade91d\" (UID: \"3f4fe5be-a92a-4711-9615-4091dbade91d\") "
	Nov 14 14:44:11 addons-317784 kubelet[1251]: I1114 14:44:11.612290    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4fe5be-a92a-4711-9615-4091dbade91d-kube-api-access-cnrkm" (OuterVolumeSpecName: "kube-api-access-cnrkm") pod "3f4fe5be-a92a-4711-9615-4091dbade91d" (UID: "3f4fe5be-a92a-4711-9615-4091dbade91d"). InnerVolumeSpecName "kube-api-access-cnrkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 14:44:11 addons-317784 kubelet[1251]: I1114 14:44:11.614270    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4fe5be-a92a-4711-9615-4091dbade91d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3f4fe5be-a92a-4711-9615-4091dbade91d" (UID: "3f4fe5be-a92a-4711-9615-4091dbade91d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 14:44:11 addons-317784 kubelet[1251]: I1114 14:44:11.710101    1251 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f4fe5be-a92a-4711-9615-4091dbade91d-webhook-cert\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:44:11 addons-317784 kubelet[1251]: I1114 14:44:11.710236    1251 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cnrkm\" (UniqueName: \"kubernetes.io/projected/3f4fe5be-a92a-4711-9615-4091dbade91d-kube-api-access-cnrkm\") on node \"addons-317784\" DevicePath \"\""
	Nov 14 14:44:12 addons-317784 kubelet[1251]: I1114 14:44:12.405592    1251 scope.go:117] "RemoveContainer" containerID="e901921b17ee9edee81f8856467c65d4c2a156b50b834133aed70f5f4b553ff0"
	Nov 14 14:44:12 addons-317784 kubelet[1251]: I1114 14:44:12.782420    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3f4fe5be-a92a-4711-9615-4091dbade91d" path="/var/lib/kubelet/pods/3f4fe5be-a92a-4711-9615-4091dbade91d/volumes"
	
	* 
	* ==> storage-provisioner [14755bac67833034eb43bd6ab601336e699ee8d5fc122106bf410928f5e351db] <==
	* I1114 14:40:24.346850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1114 14:40:54.392047       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [226be02a2e442cd5048a19d0dc1e08fee4f7e97108673ba879ca1357c0838514] <==
	* I1114 14:40:56.131435       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 14:40:56.148785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 14:40:56.148927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 14:40:56.169906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 14:40:56.172213       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-317784_8a56bc5e-916b-4506-ba59-40b1e3ec7ba5!
	I1114 14:40:56.182208       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d057e82-a032-438c-96d7-82fbcaa8824b", APIVersion:"v1", ResourceVersion:"903", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-317784_8a56bc5e-916b-4506-ba59-40b1e3ec7ba5 became leader
	I1114 14:40:56.274510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-317784_8a56bc5e-916b-4506-ba59-40b1e3ec7ba5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-317784 -n addons-317784
helpers_test.go:261: (dbg) Run:  kubectl --context addons-317784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (162.99s)
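Note (not part of the captured test output): the post-mortem above ends with a kubectl query for pods whose phase is not Running. A minimal Go sketch of running that same query outside the test harness is shown below; it assumes kubectl is on PATH and that the addons-317784 context from this run still exists, and it is only an illustration, not part of the test suite.

	// list_not_running.go - sketch of the post-mortem pod query, for local debugging only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Same query the helpers run: all pods, any namespace, whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", "addons-317784",
			"get", "po", "-A",
			"--field-selector=status.phase!=Running",
			"-o=jsonpath={.items[*].metadata.name}").CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl failed: %v\n%s", err, out)
		}
		fmt.Printf("non-Running pods: %q\n", string(out))
	}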

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.61s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-317784
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-317784: exit status 82 (2m1.613427969s)

                                                
                                                
-- stdout --
	* Stopping node "addons-317784"  ...
	* Stopping node "addons-317784"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-317784" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-317784
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-317784: exit status 11 (21.705068309s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-317784" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-317784
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-317784: exit status 11 (6.14466664s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-317784" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-317784
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-317784: exit status 11 (6.143111336s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-317784" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.61s)
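The two MK_ADDON_DISABLE_PAUSED errors above come from the addon-disable path SSHing into the VM (192.168.39.16:22) to check whether the container runtime is paused; because the node was stopped earlier in TestAddons/StoppedEnableDisable, the dial fails with "no route to host". A minimal sketch of reproducing and triaging this by hand, assuming the same profile name and the out/minikube-linux-amd64 binary from this run:

    out/minikube-linux-amd64 status -p addons-317784                     # the host should still report Stopped at this point
    out/minikube-linux-amd64 addons disable dashboard -p addons-317784   # reproduces the exit status 11 while the VM is unreachable
    out/minikube-linux-amd64 logs --file=logs.txt -p addons-317784       # collects the logs.txt the error box asks to attach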

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr: (2.673144574s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image ls: (2.381120914s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-593453" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.05s)
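This check exercises minikube's image commands end to end: the test loads the tag into the cluster with "image load --daemon" and then expects "image ls" to list it, so the failure means the tag never showed up in the listing. A short sketch of the same verification done manually, assuming the profile and tag from this run:

    out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
    out/minikube-linux-amd64 -p functional-593453 image ls | grep addon-resizer   # the functional-593453 tag should appear here; in this run it did not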

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (166.94s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-944535 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-944535 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.940091606s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-944535 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-944535 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c0772485-7b80-43e3-95b2-80b72f90f329] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c0772485-7b80-43e3-95b2-80b72f90f329] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.012386522s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1114 14:54:18.421526  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-944535 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.93743285s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
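The failing step drives the ingress from inside the VM: "minikube ssh" runs curl against 127.0.0.1 with a Host: nginx.example.com header, and the wrapped curl exits with status 28, curl's timeout code, which suggests the connection to the ingress never completed. A sketch of a manual re-check with an explicit timeout plus a look at the controller pod, assuming the same profile:

    out/minikube-linux-amd64 -p ingress-addon-legacy-944535 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context ingress-addon-legacy-944535 -n ingress-nginx get pods -o wide   # the controller pod waited on earlier should still be Running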
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-944535 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.198
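The ingress-dns step resolves the test hostname directly against the cluster IP reported by "minikube ip". A minimal sketch of the same lookup, using the IP printed in this run:

    out/minikube-linux-amd64 -p ingress-addon-legacy-944535 ip   # printed 192.168.39.198 in this run
    nslookup hello-john.test 192.168.39.198                      # the ingress-dns addon should answer for hello-john.test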
E1114 14:56:27.620991  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:27.626287  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:27.636632  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons disable ingress-dns --alsologtostderr -v=1
E1114 14:56:27.657789  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:27.698194  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:27.779371  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:27.939856  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:28.260393  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:28.900789  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons disable ingress-dns --alsologtostderr -v=1: (2.404346312s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons disable ingress --alsologtostderr -v=1
E1114 14:56:30.180983  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:32.741627  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:56:34.576697  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons disable ingress --alsologtostderr -v=1: (7.705839478s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-944535 -n ingress-addon-legacy-944535
E1114 14:56:37.862307  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-944535 logs -n 25: (1.152692508s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image     | functional-593453 image load                                              | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:51 UTC | 14 Nov 23 14:51 UTC |
	|           | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image     | functional-593453 image ls                                                | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:51 UTC | 14 Nov 23 14:51 UTC |
	| image     | functional-593453 image save --daemon                                     | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:51 UTC | 14 Nov 23 14:52 UTC |
	|           | gcr.io/google-containers/addon-resizer:functional-593453                  |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| dashboard | --url --port 36195                                                        | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | -p functional-593453                                                      |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| service   | functional-593453 service list                                            | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	| service   | functional-593453 service                                                 | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | hello-node-connect --url                                                  |                             |         |         |                     |                     |
	| image     | functional-593453                                                         | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | image ls --format short                                                   |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image     | functional-593453                                                         | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh       | functional-593453 ssh pgrep                                               | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC |                     |
	|           | buildkitd                                                                 |                             |         |         |                     |                     |
	| service   | functional-593453 service list                                            | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | -o json                                                                   |                             |         |         |                     |                     |
	| image     | functional-593453                                                         | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | image ls --format json                                                    |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image     | functional-593453 image build -t                                          | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | localhost/my-image:functional-593453                                      |                             |         |         |                     |                     |
	|           | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image     | functional-593453                                                         | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | image ls --format table                                                   |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| service   | functional-593453 service                                                 | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | --namespace=default --https                                               |                             |         |         |                     |                     |
	|           | --url hello-node                                                          |                             |         |         |                     |                     |
	| service   | functional-593453                                                         | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | service hello-node --url                                                  |                             |         |         |                     |                     |
	|           | --format={{.IP}}                                                          |                             |         |         |                     |                     |
	| service   | functional-593453 service                                                 | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	|           | hello-node --url                                                          |                             |         |         |                     |                     |
	| image     | functional-593453 image ls                                                | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	| delete    | -p functional-593453                                                      | functional-593453           | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:52 UTC |
	| start     | -p ingress-addon-legacy-944535                                            | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:52 UTC | 14 Nov 23 14:53 UTC |
	|           | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|           | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|           | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|           | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons    | ingress-addon-legacy-944535                                               | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:53 UTC | 14 Nov 23 14:53 UTC |
	|           | addons enable ingress                                                     |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons    | ingress-addon-legacy-944535                                               | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:53 UTC | 14 Nov 23 14:53 UTC |
	|           | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh       | ingress-addon-legacy-944535                                               | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:54 UTC |                     |
	|           | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|           | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip        | ingress-addon-legacy-944535 ip                                            | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:56 UTC | 14 Nov 23 14:56 UTC |
	| addons    | ingress-addon-legacy-944535                                               | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:56 UTC | 14 Nov 23 14:56 UTC |
	|           | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons    | ingress-addon-legacy-944535                                               | ingress-addon-legacy-944535 | jenkins | v1.32.0 | 14 Nov 23 14:56 UTC | 14 Nov 23 14:56 UTC |
	|           | addons disable ingress                                                    |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:52:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:52:19.990259  840593 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:52:19.990415  840593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:52:19.990428  840593 out.go:309] Setting ErrFile to fd 2...
	I1114 14:52:19.990435  840593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:52:19.990663  840593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 14:52:19.991269  840593 out.go:303] Setting JSON to false
	I1114 14:52:19.992322  840593 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":41692,"bootTime":1699931848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:52:19.992390  840593 start.go:138] virtualization: kvm guest
	I1114 14:52:19.994637  840593 out.go:177] * [ingress-addon-legacy-944535] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:52:19.996028  840593 notify.go:220] Checking for updates...
	I1114 14:52:19.996037  840593 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 14:52:19.997422  840593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:52:19.998837  840593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:52:20.000246  840593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:52:20.001837  840593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 14:52:20.003164  840593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:52:20.004854  840593 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:52:20.040878  840593 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 14:52:20.042208  840593 start.go:298] selected driver: kvm2
	I1114 14:52:20.042219  840593 start.go:902] validating driver "kvm2" against <nil>
	I1114 14:52:20.042231  840593 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:52:20.042960  840593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:52:20.043038  840593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 14:52:20.058650  840593 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 14:52:20.058744  840593 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 14:52:20.058969  840593 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 14:52:20.059117  840593 cni.go:84] Creating CNI manager for ""
	I1114 14:52:20.059149  840593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:52:20.059162  840593 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 14:52:20.059174  840593 start_flags.go:323] config:
	{Name:ingress-addon-legacy-944535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-944535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:52:20.059368  840593 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:52:20.061149  840593 out.go:177] * Starting control plane node ingress-addon-legacy-944535 in cluster ingress-addon-legacy-944535
	I1114 14:52:20.062505  840593 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 14:52:20.083226  840593 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1114 14:52:20.083253  840593 cache.go:56] Caching tarball of preloaded images
	I1114 14:52:20.083419  840593 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 14:52:20.085212  840593 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1114 14:52:20.086605  840593 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:52:20.111921  840593 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1114 14:52:23.317907  840593 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:52:23.318029  840593 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:52:24.331584  840593 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1114 14:52:24.331974  840593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/config.json ...
	I1114 14:52:24.332023  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/config.json: {Name:mkc179681f70b533f575fb535a49d732b0c1c504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:24.332240  840593 start.go:365] acquiring machines lock for ingress-addon-legacy-944535: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 14:52:24.332295  840593 start.go:369] acquired machines lock for "ingress-addon-legacy-944535" in 27.019µs
	I1114 14:52:24.332323  840593 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-944535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-944535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:52:24.332435  840593 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 14:52:24.334762  840593 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1114 14:52:24.334959  840593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:52:24.335033  840593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:52:24.349535  840593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I1114 14:52:24.350060  840593 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:52:24.350691  840593 main.go:141] libmachine: Using API Version  1
	I1114 14:52:24.350719  840593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:52:24.351147  840593 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:52:24.351358  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetMachineName
	I1114 14:52:24.351523  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:24.351691  840593 start.go:159] libmachine.API.Create for "ingress-addon-legacy-944535" (driver="kvm2")
	I1114 14:52:24.351717  840593 client.go:168] LocalClient.Create starting
	I1114 14:52:24.351778  840593 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 14:52:24.351816  840593 main.go:141] libmachine: Decoding PEM data...
	I1114 14:52:24.351832  840593 main.go:141] libmachine: Parsing certificate...
	I1114 14:52:24.351892  840593 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 14:52:24.351910  840593 main.go:141] libmachine: Decoding PEM data...
	I1114 14:52:24.351926  840593 main.go:141] libmachine: Parsing certificate...
	I1114 14:52:24.351945  840593 main.go:141] libmachine: Running pre-create checks...
	I1114 14:52:24.351956  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .PreCreateCheck
	I1114 14:52:24.352324  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetConfigRaw
	I1114 14:52:24.352714  840593 main.go:141] libmachine: Creating machine...
	I1114 14:52:24.352729  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .Create
	I1114 14:52:24.352916  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Creating KVM machine...
	I1114 14:52:24.354249  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found existing default KVM network
	I1114 14:52:24.354969  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:24.354835  840615 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I1114 14:52:24.360195  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | trying to create private KVM network mk-ingress-addon-legacy-944535 192.168.39.0/24...
	I1114 14:52:24.431462  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | private KVM network mk-ingress-addon-legacy-944535 192.168.39.0/24 created
	I1114 14:52:24.431523  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535 ...
	I1114 14:52:24.431544  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:24.431416  840615 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:52:24.431565  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 14:52:24.431642  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 14:52:24.675866  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:24.675735  840615 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa...
	I1114 14:52:25.043640  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:25.043467  840615 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/ingress-addon-legacy-944535.rawdisk...
	I1114 14:52:25.043685  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Writing magic tar header
	I1114 14:52:25.043708  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Writing SSH key tar header
	I1114 14:52:25.043722  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:25.043624  840615 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535 ...
	I1114 14:52:25.043873  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535
	I1114 14:52:25.043908  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535 (perms=drwx------)
	I1114 14:52:25.043932  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 14:52:25.043954  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:52:25.043968  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 14:52:25.044013  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 14:52:25.044058  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 14:52:25.044076  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 14:52:25.044096  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 14:52:25.044112  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 14:52:25.044126  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 14:52:25.044137  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Creating domain...
	I1114 14:52:25.044152  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home/jenkins
	I1114 14:52:25.044164  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Checking permissions on dir: /home
	I1114 14:52:25.044178  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Skipping /home - not owner
	I1114 14:52:25.045184  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) define libvirt domain using xml: 
	I1114 14:52:25.045212  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) <domain type='kvm'>
	I1114 14:52:25.045225  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <name>ingress-addon-legacy-944535</name>
	I1114 14:52:25.045235  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <memory unit='MiB'>4096</memory>
	I1114 14:52:25.045246  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <vcpu>2</vcpu>
	I1114 14:52:25.045260  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <features>
	I1114 14:52:25.045272  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <acpi/>
	I1114 14:52:25.045288  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <apic/>
	I1114 14:52:25.045297  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <pae/>
	I1114 14:52:25.045306  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     
	I1114 14:52:25.045314  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   </features>
	I1114 14:52:25.045321  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <cpu mode='host-passthrough'>
	I1114 14:52:25.045333  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   
	I1114 14:52:25.045343  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   </cpu>
	I1114 14:52:25.045357  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <os>
	I1114 14:52:25.045373  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <type>hvm</type>
	I1114 14:52:25.045385  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <boot dev='cdrom'/>
	I1114 14:52:25.045398  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <boot dev='hd'/>
	I1114 14:52:25.045405  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <bootmenu enable='no'/>
	I1114 14:52:25.045415  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   </os>
	I1114 14:52:25.045429  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   <devices>
	I1114 14:52:25.045445  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <disk type='file' device='cdrom'>
	I1114 14:52:25.045461  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/boot2docker.iso'/>
	I1114 14:52:25.045474  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <target dev='hdc' bus='scsi'/>
	I1114 14:52:25.045487  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <readonly/>
	I1114 14:52:25.045498  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </disk>
	I1114 14:52:25.045509  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <disk type='file' device='disk'>
	I1114 14:52:25.045527  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 14:52:25.045562  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/ingress-addon-legacy-944535.rawdisk'/>
	I1114 14:52:25.045577  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <target dev='hda' bus='virtio'/>
	I1114 14:52:25.045586  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </disk>
	I1114 14:52:25.045598  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <interface type='network'>
	I1114 14:52:25.045612  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <source network='mk-ingress-addon-legacy-944535'/>
	I1114 14:52:25.045626  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <model type='virtio'/>
	I1114 14:52:25.045641  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </interface>
	I1114 14:52:25.045654  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <interface type='network'>
	I1114 14:52:25.045667  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <source network='default'/>
	I1114 14:52:25.045677  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <model type='virtio'/>
	I1114 14:52:25.045690  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </interface>
	I1114 14:52:25.045703  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <serial type='pty'>
	I1114 14:52:25.045780  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <target port='0'/>
	I1114 14:52:25.045815  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </serial>
	I1114 14:52:25.045832  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <console type='pty'>
	I1114 14:52:25.045850  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <target type='serial' port='0'/>
	I1114 14:52:25.045862  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </console>
	I1114 14:52:25.045870  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     <rng model='virtio'>
	I1114 14:52:25.045880  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)       <backend model='random'>/dev/random</backend>
	I1114 14:52:25.045910  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     </rng>
	I1114 14:52:25.045934  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     
	I1114 14:52:25.045954  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)     
	I1114 14:52:25.045968  840593 main.go:141] libmachine: (ingress-addon-legacy-944535)   </devices>
	I1114 14:52:25.045982  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) </domain>
	I1114 14:52:25.046000  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) 
	I1114 14:52:25.049976  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:98:61:9b in network default
	I1114 14:52:25.050641  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Ensuring networks are active...
	I1114 14:52:25.050669  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:25.051331  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Ensuring network default is active
	I1114 14:52:25.051559  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Ensuring network mk-ingress-addon-legacy-944535 is active
	I1114 14:52:25.051997  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Getting domain xml...
	I1114 14:52:25.052599  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Creating domain...
	I1114 14:52:26.286505  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Waiting to get IP...
	I1114 14:52:26.287378  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:26.287774  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:26.287833  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:26.287751  840615 retry.go:31] will retry after 234.097156ms: waiting for machine to come up
	I1114 14:52:26.523454  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:26.523990  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:26.524016  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:26.523921  840615 retry.go:31] will retry after 333.035733ms: waiting for machine to come up
	I1114 14:52:26.858576  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:26.859107  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:26.859141  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:26.859051  840615 retry.go:31] will retry after 440.645077ms: waiting for machine to come up
	I1114 14:52:27.302037  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:27.302584  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:27.302616  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:27.302540  840615 retry.go:31] will retry after 598.728329ms: waiting for machine to come up
	I1114 14:52:27.903131  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:27.903587  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:27.903624  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:27.903527  840615 retry.go:31] will retry after 722.175769ms: waiting for machine to come up
	I1114 14:52:28.627477  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:28.627881  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:28.627912  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:28.627820  840615 retry.go:31] will retry after 929.064252ms: waiting for machine to come up
	I1114 14:52:29.558712  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:29.559217  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:29.559249  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:29.559124  840615 retry.go:31] will retry after 724.10855ms: waiting for machine to come up
	I1114 14:52:30.285054  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:30.285690  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:30.285738  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:30.285585  840615 retry.go:31] will retry after 1.128893429s: waiting for machine to come up
	I1114 14:52:31.415957  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:31.416361  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:31.416396  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:31.416293  840615 retry.go:31] will retry after 1.385326853s: waiting for machine to come up
	I1114 14:52:32.803910  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:32.804409  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:32.804440  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:32.804346  840615 retry.go:31] will retry after 1.593127374s: waiting for machine to come up
	I1114 14:52:34.398627  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:34.398979  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:34.399013  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:34.398913  840615 retry.go:31] will retry after 2.902628152s: waiting for machine to come up
	I1114 14:52:37.304914  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:37.305426  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:37.305462  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:37.305366  840615 retry.go:31] will retry after 3.540542619s: waiting for machine to come up
	I1114 14:52:40.847905  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:40.848405  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:40.848430  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:40.848360  840615 retry.go:31] will retry after 3.264810747s: waiting for machine to come up
	I1114 14:52:44.116800  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:44.117093  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find current IP address of domain ingress-addon-legacy-944535 in network mk-ingress-addon-legacy-944535
	I1114 14:52:44.117121  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | I1114 14:52:44.117032  840615 retry.go:31] will retry after 3.43772907s: waiting for machine to come up
	I1114 14:52:47.559020  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.559507  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Found IP for machine: 192.168.39.198
	I1114 14:52:47.559538  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has current primary IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.559549  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Reserving static IP address...
	I1114 14:52:47.559917  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-944535", mac: "52:54:00:7c:ca:1e", ip: "192.168.39.198"} in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.634719  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Getting to WaitForSSH function...
	I1114 14:52:47.634760  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Reserved static IP address: 192.168.39.198
	I1114 14:52:47.634776  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Waiting for SSH to be available...
	I1114 14:52:47.637506  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.637986  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:47.638024  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.638151  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Using SSH client type: external
	I1114 14:52:47.638175  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa (-rw-------)
	I1114 14:52:47.638226  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 14:52:47.638244  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | About to run SSH command:
	I1114 14:52:47.638258  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | exit 0
	I1114 14:52:47.728766  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | SSH cmd err, output: <nil>: 
	I1114 14:52:47.729008  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) KVM machine creation complete!
	I1114 14:52:47.729397  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetConfigRaw
	I1114 14:52:47.730079  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:47.730326  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:47.730543  840593 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 14:52:47.730564  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetState
	I1114 14:52:47.732198  840593 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 14:52:47.732224  840593 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 14:52:47.732234  840593 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 14:52:47.732246  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:47.735202  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.735658  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:47.735685  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.735888  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:47.736090  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:47.736310  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:47.736461  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:47.736666  840593 main.go:141] libmachine: Using SSH client type: native
	I1114 14:52:47.737109  840593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1114 14:52:47.737128  840593 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 14:52:47.852139  840593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:52:47.852171  840593 main.go:141] libmachine: Detecting the provisioner...
	I1114 14:52:47.852183  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:47.855538  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.855915  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:47.855952  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.856136  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:47.856376  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:47.856575  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:47.856720  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:47.856942  840593 main.go:141] libmachine: Using SSH client type: native
	I1114 14:52:47.857386  840593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1114 14:52:47.857404  840593 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 14:52:47.973518  840593 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 14:52:47.973647  840593 main.go:141] libmachine: found compatible host: buildroot
	I1114 14:52:47.973664  840593 main.go:141] libmachine: Provisioning with buildroot...
	I1114 14:52:47.973676  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetMachineName
	I1114 14:52:47.973991  840593 buildroot.go:166] provisioning hostname "ingress-addon-legacy-944535"
	I1114 14:52:47.974030  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetMachineName
	I1114 14:52:47.974308  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:47.977186  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.977613  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:47.977645  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:47.977739  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:47.977933  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:47.978108  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:47.978265  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:47.978408  840593 main.go:141] libmachine: Using SSH client type: native
	I1114 14:52:47.978754  840593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1114 14:52:47.978769  840593 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-944535 && echo "ingress-addon-legacy-944535" | sudo tee /etc/hostname
	I1114 14:52:48.105332  840593 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-944535
	
	I1114 14:52:48.105369  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:48.108452  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.108806  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.108845  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.109089  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:48.109317  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:48.109476  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:48.109624  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:48.109796  840593 main.go:141] libmachine: Using SSH client type: native
	I1114 14:52:48.110115  840593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1114 14:52:48.110133  840593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-944535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-944535/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-944535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:52:48.232971  840593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:52:48.233004  840593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 14:52:48.233046  840593 buildroot.go:174] setting up certificates
	I1114 14:52:48.233061  840593 provision.go:83] configureAuth start
	I1114 14:52:48.233076  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetMachineName
	I1114 14:52:48.233408  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetIP
	I1114 14:52:48.236208  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.236543  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.236582  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.236779  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:48.239125  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.239440  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.239486  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.239585  840593 provision.go:138] copyHostCerts
	I1114 14:52:48.239619  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 14:52:48.239653  840593 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 14:52:48.239680  840593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 14:52:48.239743  840593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 14:52:48.239823  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 14:52:48.239845  840593 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 14:52:48.239854  840593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 14:52:48.239882  840593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 14:52:48.239924  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 14:52:48.239940  840593 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 14:52:48.239947  840593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 14:52:48.239969  840593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 14:52:48.240013  840593 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-944535 san=[192.168.39.198 192.168.39.198 localhost 127.0.0.1 minikube ingress-addon-legacy-944535]
	I1114 14:52:48.433903  840593 provision.go:172] copyRemoteCerts
	I1114 14:52:48.433968  840593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:52:48.434000  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:48.436714  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.437080  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.437109  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.437242  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:48.437481  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:48.437683  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:48.437818  840593 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa Username:docker}
	I1114 14:52:48.526497  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 14:52:48.526584  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:52:48.548717  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 14:52:48.548802  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1114 14:52:48.571060  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 14:52:48.571133  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 14:52:48.593969  840593 provision.go:86] duration metric: configureAuth took 360.892719ms
	I1114 14:52:48.593993  840593 buildroot.go:189] setting minikube options for container-runtime
	I1114 14:52:48.594157  840593 config.go:182] Loaded profile config "ingress-addon-legacy-944535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 14:52:48.594245  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:48.596879  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.597212  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.597242  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.597460  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:48.597680  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:48.597849  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:48.598023  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:48.598247  840593 main.go:141] libmachine: Using SSH client type: native
	I1114 14:52:48.598633  840593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1114 14:52:48.598651  840593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:52:48.906445  840593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:52:48.906488  840593 main.go:141] libmachine: Checking connection to Docker...
	I1114 14:52:48.906503  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetURL
	I1114 14:52:48.907941  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Using libvirt version 6000000
	I1114 14:52:48.910424  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.910782  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.910808  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.910971  840593 main.go:141] libmachine: Docker is up and running!
	I1114 14:52:48.910982  840593 main.go:141] libmachine: Reticulating splines...
	I1114 14:52:48.910989  840593 client.go:171] LocalClient.Create took 24.559264112s
	I1114 14:52:48.911018  840593 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-944535" took 24.559328875s
	I1114 14:52:48.911029  840593 start.go:300] post-start starting for "ingress-addon-legacy-944535" (driver="kvm2")
	I1114 14:52:48.911044  840593 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:52:48.911062  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:48.911323  840593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:52:48.911350  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:48.913874  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.914235  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:48.914266  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:48.914396  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:48.914588  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:48.914783  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:48.914914  840593 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa Username:docker}
	I1114 14:52:49.001660  840593 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:52:49.005774  840593 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 14:52:49.005803  840593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 14:52:49.005873  840593 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 14:52:49.005962  840593 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 14:52:49.005975  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /etc/ssl/certs/8322112.pem
	I1114 14:52:49.006104  840593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:52:49.013978  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 14:52:49.036400  840593 start.go:303] post-start completed in 125.357054ms
	I1114 14:52:49.036481  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetConfigRaw
	I1114 14:52:49.037085  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetIP
	I1114 14:52:49.039838  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.040336  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:49.040388  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.040557  840593 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/config.json ...
	I1114 14:52:49.040732  840593 start.go:128] duration metric: createHost completed in 24.708282354s
	I1114 14:52:49.040778  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:49.042995  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.043339  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:49.043378  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.043570  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:49.043795  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:49.043934  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:49.044106  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:49.044312  840593 main.go:141] libmachine: Using SSH client type: native
	I1114 14:52:49.044649  840593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1114 14:52:49.044662  840593 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 14:52:49.161631  840593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699973569.131324638
	
	I1114 14:52:49.161656  840593 fix.go:206] guest clock: 1699973569.131324638
	I1114 14:52:49.161663  840593 fix.go:219] Guest: 2023-11-14 14:52:49.131324638 +0000 UTC Remote: 2023-11-14 14:52:49.040758249 +0000 UTC m=+29.101641474 (delta=90.566389ms)
	I1114 14:52:49.161707  840593 fix.go:190] guest clock delta is within tolerance: 90.566389ms
	I1114 14:52:49.161713  840593 start.go:83] releasing machines lock for "ingress-addon-legacy-944535", held for 24.829406078s
	I1114 14:52:49.161738  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:49.162078  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetIP
	I1114 14:52:49.164994  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.165448  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:49.165480  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.165676  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:49.166208  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:49.166386  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:52:49.166509  840593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:52:49.166558  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:49.166615  840593 ssh_runner.go:195] Run: cat /version.json
	I1114 14:52:49.166644  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:52:49.169180  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.169357  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.169616  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:49.169650  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.169760  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:49.169795  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:49.169827  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:49.169958  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:49.169991  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:52:49.170140  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:52:49.170140  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:49.170316  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:52:49.170324  840593 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa Username:docker}
	I1114 14:52:49.170429  840593 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa Username:docker}
	I1114 14:52:49.274683  840593 ssh_runner.go:195] Run: systemctl --version
	I1114 14:52:49.280299  840593 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:52:49.447643  840593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 14:52:49.453960  840593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 14:52:49.454039  840593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:52:49.471509  840593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:52:49.471531  840593 start.go:472] detecting cgroup driver to use...
	I1114 14:52:49.471604  840593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:52:49.488778  840593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:52:49.503604  840593 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:52:49.503667  840593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:52:49.518860  840593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:52:49.533682  840593 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 14:52:49.645656  840593 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:52:49.765826  840593 docker.go:219] disabling docker service ...
	I1114 14:52:49.765921  840593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:52:49.778297  840593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:52:49.789313  840593 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:52:49.888856  840593 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:52:49.986977  840593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:52:50.000781  840593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:52:50.018363  840593 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1114 14:52:50.018453  840593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:52:50.028086  840593 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 14:52:50.028174  840593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:52:50.037409  840593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:52:50.046341  840593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:52:50.055152  840593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:52:50.064145  840593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:52:50.071896  840593 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 14:52:50.071956  840593 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 14:52:50.083838  840593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:52:50.092780  840593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:52:50.190598  840593 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 14:52:50.343048  840593 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 14:52:50.343140  840593 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 14:52:50.348343  840593 start.go:540] Will wait 60s for crictl version
	I1114 14:52:50.348399  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:50.352047  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:52:50.384960  840593 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 14:52:50.385051  840593 ssh_runner.go:195] Run: crio --version
	I1114 14:52:50.428868  840593 ssh_runner.go:195] Run: crio --version
	I1114 14:52:50.489249  840593 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1114 14:52:50.490883  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetIP
	I1114 14:52:50.494143  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:50.494722  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:52:50.494752  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:52:50.494927  840593 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 14:52:50.499256  840593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:52:50.512042  840593 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 14:52:50.512094  840593 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:52:50.544971  840593 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 14:52:50.545053  840593 ssh_runner.go:195] Run: which lz4
	I1114 14:52:50.548843  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1114 14:52:50.548929  840593 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 14:52:50.553174  840593 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 14:52:50.553199  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1114 14:52:52.441362  840593 crio.go:444] Took 1.892461 seconds to copy over tarball
	I1114 14:52:52.441446  840593 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 14:52:55.590412  840593 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.148929675s)
	I1114 14:52:55.590441  840593 crio.go:451] Took 3.149053 seconds to extract the tarball
	I1114 14:52:55.590452  840593 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 14:52:55.635130  840593 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:52:55.710635  840593 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 14:52:55.710670  840593 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 14:52:55.710780  840593 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 14:52:55.710814  840593 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1114 14:52:55.710825  840593 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1114 14:52:55.710847  840593 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1114 14:52:55.710852  840593 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 14:52:55.710780  840593 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 14:52:55.710750  840593 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:52:55.710763  840593 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 14:52:55.712223  840593 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 14:52:55.712240  840593 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1114 14:52:55.712243  840593 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1114 14:52:55.712256  840593 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1114 14:52:55.712262  840593 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:52:55.712216  840593 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 14:52:55.712267  840593 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 14:52:55.712286  840593 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 14:52:55.882332  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1114 14:52:55.882543  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1114 14:52:55.883446  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1114 14:52:55.895922  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 14:52:55.896390  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1114 14:52:55.901389  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1114 14:52:55.933837  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1114 14:52:55.996093  840593 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1114 14:52:55.996145  840593 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 14:52:55.996147  840593 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1114 14:52:55.996185  840593 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1114 14:52:55.996191  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:55.996227  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:55.996824  840593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:52:56.009673  840593 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1114 14:52:56.009713  840593 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 14:52:56.009760  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:56.064285  840593 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1114 14:52:56.064342  840593 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 14:52:56.064351  840593 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1114 14:52:56.064393  840593 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1114 14:52:56.064402  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:56.064439  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:56.067182  840593 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1114 14:52:56.067224  840593 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1114 14:52:56.067267  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:56.083082  840593 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1114 14:52:56.083145  840593 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 14:52:56.083155  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1114 14:52:56.083191  840593 ssh_runner.go:195] Run: which crictl
	I1114 14:52:56.083214  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1114 14:52:56.192319  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1114 14:52:56.192319  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1114 14:52:56.192384  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 14:52:56.192455  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1114 14:52:56.192569  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1114 14:52:56.192608  840593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1114 14:52:56.192646  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1114 14:52:56.299615  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1114 14:52:56.308001  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1114 14:52:56.308062  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1114 14:52:56.308147  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1114 14:52:56.308258  840593 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1114 14:52:56.308312  840593 cache_images.go:92] LoadImages completed in 597.628645ms
	W1114 14:52:56.308417  840593 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I1114 14:52:56.308502  840593 ssh_runner.go:195] Run: crio config
	I1114 14:52:56.371861  840593 cni.go:84] Creating CNI manager for ""
	I1114 14:52:56.371887  840593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:52:56.371917  840593 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:52:56.371945  840593 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-944535 NodeName:ingress-addon-legacy-944535 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 14:52:56.372133  840593 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-944535"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 14:52:56.372251  840593 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-944535 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-944535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:52:56.372336  840593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1114 14:52:56.381747  840593 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:52:56.381842  840593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 14:52:56.390608  840593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1114 14:52:56.407163  840593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1114 14:52:56.422672  840593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I1114 14:52:56.440411  840593 ssh_runner.go:195] Run: grep 192.168.39.198	control-plane.minikube.internal$ /etc/hosts
	I1114 14:52:56.444388  840593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:52:56.456376  840593 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535 for IP: 192.168.39.198
	I1114 14:52:56.456418  840593 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.456618  840593 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 14:52:56.456672  840593 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 14:52:56.456734  840593 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.key
	I1114 14:52:56.456767  840593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt with IP's: []
	I1114 14:52:56.625887  840593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt ...
	I1114 14:52:56.625927  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: {Name:mka434b784b12852d61bf16d4c7e1f880cb350c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.626143  840593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.key ...
	I1114 14:52:56.626166  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.key: {Name:mk2179252fc561535dd9b6facd82d3dc325967db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.626302  840593 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key.e8b7c679
	I1114 14:52:56.626328  840593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt.e8b7c679 with IP's: [192.168.39.198 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 14:52:56.897447  840593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt.e8b7c679 ...
	I1114 14:52:56.897487  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt.e8b7c679: {Name:mk930862a596b95b53c479e07b07a99b5d6496f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.897689  840593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key.e8b7c679 ...
	I1114 14:52:56.897723  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key.e8b7c679: {Name:mk3c79c9bdea350cd8ae0c42ec45f3ad6abc41b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.897834  840593 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt.e8b7c679 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt
	I1114 14:52:56.897931  840593 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key.e8b7c679 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key
	I1114 14:52:56.898018  840593 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.key
	I1114 14:52:56.898042  840593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.crt with IP's: []
	I1114 14:52:56.962386  840593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.crt ...
	I1114 14:52:56.962429  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.crt: {Name:mkf19f0775bd70eb0b27097392ad0e3e3f134808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.962613  840593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.key ...
	I1114 14:52:56.962638  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.key: {Name:mk84171ae1581b941557579e4f7ae9991cb55644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:52:56.962746  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 14:52:56.962790  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 14:52:56.962812  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 14:52:56.962835  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 14:52:56.962857  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 14:52:56.962880  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 14:52:56.962901  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 14:52:56.962925  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 14:52:56.963010  840593 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 14:52:56.963063  840593 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 14:52:56.963082  840593 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 14:52:56.963158  840593 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:52:56.963198  840593 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:52:56.963243  840593 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 14:52:56.963318  840593 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 14:52:56.963365  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /usr/share/ca-certificates/8322112.pem
	I1114 14:52:56.963389  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:52:56.963410  840593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem -> /usr/share/ca-certificates/832211.pem
	I1114 14:52:56.964066  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 14:52:56.988583  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 14:52:57.010951  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 14:52:57.036840  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 14:52:57.060560  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:52:57.083155  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 14:52:57.108089  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:52:57.132679  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 14:52:57.157699  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 14:52:57.182263  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:52:57.206863  840593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 14:52:57.229518  840593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 14:52:57.246521  840593 ssh_runner.go:195] Run: openssl version
	I1114 14:52:57.252341  840593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 14:52:57.262078  840593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 14:52:57.266729  840593 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 14:52:57.266779  840593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 14:52:57.272651  840593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 14:52:57.282671  840593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 14:52:57.292435  840593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 14:52:57.297289  840593 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 14:52:57.297339  840593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 14:52:57.303284  840593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 14:52:57.313306  840593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:52:57.323200  840593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:52:57.328050  840593 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:52:57.328099  840593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:52:57.333742  840593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:52:57.343750  840593 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:52:57.348215  840593 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:52:57.348264  840593 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-944535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-944535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:52:57.348351  840593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 14:52:57.348394  840593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 14:52:57.388003  840593 cri.go:89] found id: ""
	I1114 14:52:57.388165  840593 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 14:52:57.397231  840593 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 14:52:57.405665  840593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 14:52:57.414228  840593 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:52:57.414277  840593 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1114 14:52:57.478780  840593 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1114 14:52:57.479270  840593 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 14:52:57.615817  840593 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 14:52:57.616000  840593 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 14:52:57.616166  840593 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 14:52:57.840069  840593 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:52:57.841302  840593 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:52:57.841414  840593 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 14:52:57.966509  840593 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 14:52:58.049441  840593 out.go:204]   - Generating certificates and keys ...
	I1114 14:52:58.049570  840593 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 14:52:58.049801  840593 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 14:52:58.399518  840593 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 14:52:58.519626  840593 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 14:52:58.740502  840593 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 14:52:59.163216  840593 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 14:52:59.334576  840593 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 14:52:59.334789  840593 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-944535 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I1114 14:52:59.588162  840593 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 14:52:59.588413  840593 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-944535 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
	I1114 14:52:59.785805  840593 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 14:52:59.970170  840593 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 14:53:00.063444  840593 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 14:53:00.063621  840593 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 14:53:00.343021  840593 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 14:53:00.549949  840593 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 14:53:00.726593  840593 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 14:53:00.902604  840593 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 14:53:00.903512  840593 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 14:53:00.905465  840593 out.go:204]   - Booting up control plane ...
	I1114 14:53:00.905569  840593 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 14:53:00.910180  840593 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 14:53:00.911531  840593 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 14:53:00.912910  840593 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 14:53:00.916212  840593 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 14:53:09.914949  840593 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003449 seconds
	I1114 14:53:09.915114  840593 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 14:53:09.936823  840593 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 14:53:10.457752  840593 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 14:53:10.457942  840593 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-944535 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 14:53:10.967113  840593 kubeadm.go:322] [bootstrap-token] Using token: t8scw4.35f9bzlrk7kylkhq
	I1114 14:53:10.968701  840593 out.go:204]   - Configuring RBAC rules ...
	I1114 14:53:10.968890  840593 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 14:53:10.973369  840593 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 14:53:10.980512  840593 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 14:53:10.987024  840593 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 14:53:10.989709  840593 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 14:53:10.992863  840593 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 14:53:11.008050  840593 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 14:53:11.271864  840593 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 14:53:11.386594  840593 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 14:53:11.387678  840593 kubeadm.go:322] 
	I1114 14:53:11.387742  840593 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 14:53:11.387750  840593 kubeadm.go:322] 
	I1114 14:53:11.387842  840593 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 14:53:11.387862  840593 kubeadm.go:322] 
	I1114 14:53:11.387893  840593 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 14:53:11.387999  840593 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 14:53:11.388082  840593 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 14:53:11.388106  840593 kubeadm.go:322] 
	I1114 14:53:11.388194  840593 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 14:53:11.388322  840593 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 14:53:11.388404  840593 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 14:53:11.388424  840593 kubeadm.go:322] 
	I1114 14:53:11.388545  840593 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 14:53:11.388644  840593 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 14:53:11.388654  840593 kubeadm.go:322] 
	I1114 14:53:11.388765  840593 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t8scw4.35f9bzlrk7kylkhq \
	I1114 14:53:11.388904  840593 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 14:53:11.388941  840593 kubeadm.go:322]     --control-plane 
	I1114 14:53:11.388949  840593 kubeadm.go:322] 
	I1114 14:53:11.389027  840593 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 14:53:11.389036  840593 kubeadm.go:322] 
	I1114 14:53:11.389104  840593 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t8scw4.35f9bzlrk7kylkhq \
	I1114 14:53:11.389273  840593 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 14:53:11.389765  840593 kubeadm.go:322] W1114 14:52:57.459350     961 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1114 14:53:11.389886  840593 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:53:11.390015  840593 kubeadm.go:322] W1114 14:53:00.894253     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 14:53:11.390133  840593 kubeadm.go:322] W1114 14:53:00.895807     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 14:53:11.390153  840593 cni.go:84] Creating CNI manager for ""
	I1114 14:53:11.390163  840593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:53:11.391820  840593 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 14:53:11.393315  840593 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 14:53:11.403179  840593 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 14:53:11.420036  840593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 14:53:11.420105  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:11.420126  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=ingress-addon-legacy-944535 minikube.k8s.io/updated_at=2023_11_14T14_53_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:11.823665  840593 ops.go:34] apiserver oom_adj: -16
	I1114 14:53:11.823737  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:11.945861  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:12.524678  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:13.024272  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:13.524782  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:14.024924  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:14.525033  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:15.024268  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:15.524797  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:16.024689  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:16.524205  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:17.024037  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:17.524028  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:18.024386  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:18.524659  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:19.024688  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:19.524064  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:20.024772  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:20.524691  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:21.024348  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:21.524569  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:22.024067  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:22.524855  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:23.024122  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:23.524911  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:24.024361  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:24.524864  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:25.024402  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:25.524393  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:26.024312  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:26.524651  840593 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:53:26.713974  840593 kubeadm.go:1081] duration metric: took 15.29393518s to wait for elevateKubeSystemPrivileges.
	I1114 14:53:26.714031  840593 kubeadm.go:406] StartCluster complete in 29.365770007s
	I1114 14:53:26.714055  840593 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:53:26.714273  840593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:53:26.715083  840593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:53:26.715339  840593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 14:53:26.715451  840593 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 14:53:26.715535  840593 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-944535"
	I1114 14:53:26.715553  840593 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-944535"
	I1114 14:53:26.715567  840593 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-944535"
	I1114 14:53:26.715595  840593 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-944535"
	I1114 14:53:26.715570  840593 config.go:182] Loaded profile config "ingress-addon-legacy-944535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 14:53:26.715665  840593 host.go:66] Checking if "ingress-addon-legacy-944535" exists ...
	I1114 14:53:26.716195  840593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:53:26.716266  840593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:53:26.716213  840593 kapi.go:59] client config for ingress-addon-legacy-944535: &rest.Config{Host:"https://192.168.39.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:53:26.716198  840593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:53:26.716377  840593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:53:26.717123  840593 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 14:53:26.732770  840593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I1114 14:53:26.733090  840593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I1114 14:53:26.733304  840593 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:53:26.733561  840593 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:53:26.733825  840593 main.go:141] libmachine: Using API Version  1
	I1114 14:53:26.733850  840593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:53:26.734154  840593 main.go:141] libmachine: Using API Version  1
	I1114 14:53:26.734168  840593 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:53:26.734195  840593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:53:26.734542  840593 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:53:26.734714  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetState
	I1114 14:53:26.734794  840593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:53:26.734839  840593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:53:26.737351  840593 kapi.go:59] client config for ingress-addon-legacy-944535: &rest.Config{Host:"https://192.168.39.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:53:26.737607  840593 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-944535"
	I1114 14:53:26.737643  840593 host.go:66] Checking if "ingress-addon-legacy-944535" exists ...
	I1114 14:53:26.737923  840593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:53:26.737947  840593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:53:26.750736  840593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1114 14:53:26.751287  840593 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:53:26.751884  840593 main.go:141] libmachine: Using API Version  1
	I1114 14:53:26.751911  840593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:53:26.752277  840593 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:53:26.752349  840593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I1114 14:53:26.752616  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetState
	I1114 14:53:26.752757  840593 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:53:26.753288  840593 main.go:141] libmachine: Using API Version  1
	I1114 14:53:26.753313  840593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:53:26.753814  840593 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:53:26.754492  840593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:53:26.754503  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:53:26.754523  840593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:53:26.756414  840593 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:53:26.757835  840593 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:53:26.757857  840593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 14:53:26.757886  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:53:26.760844  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:53:26.761210  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:53:26.761241  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:53:26.761366  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:53:26.761528  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:53:26.761634  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:53:26.761728  840593 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa Username:docker}
	I1114 14:53:26.770389  840593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I1114 14:53:26.770853  840593 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:53:26.771403  840593 main.go:141] libmachine: Using API Version  1
	I1114 14:53:26.771431  840593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:53:26.771786  840593 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:53:26.771987  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetState
	I1114 14:53:26.773436  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .DriverName
	I1114 14:53:26.773755  840593 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 14:53:26.773776  840593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 14:53:26.773802  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHHostname
	I1114 14:53:26.776668  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:53:26.777113  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ca:1e", ip: ""} in network mk-ingress-addon-legacy-944535: {Iface:virbr1 ExpiryTime:2023-11-14 15:52:40 +0000 UTC Type:0 Mac:52:54:00:7c:ca:1e Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ingress-addon-legacy-944535 Clientid:01:52:54:00:7c:ca:1e}
	I1114 14:53:26.777145  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | domain ingress-addon-legacy-944535 has defined IP address 192.168.39.198 and MAC address 52:54:00:7c:ca:1e in network mk-ingress-addon-legacy-944535
	I1114 14:53:26.777277  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHPort
	I1114 14:53:26.777463  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHKeyPath
	I1114 14:53:26.777641  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .GetSSHUsername
	I1114 14:53:26.777848  840593 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/ingress-addon-legacy-944535/id_rsa Username:docker}
	I1114 14:53:26.847013  840593 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-944535" context rescaled to 1 replicas
	I1114 14:53:26.847063  840593 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:53:26.848721  840593 out.go:177] * Verifying Kubernetes components...
	I1114 14:53:26.850206  840593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:53:26.942720  840593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 14:53:26.957870  840593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:53:27.013643  840593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 14:53:27.014367  840593 kapi.go:59] client config for ingress-addon-legacy-944535: &rest.Config{Host:"https://192.168.39.198:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:53:27.014760  840593 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-944535" to be "Ready" ...
	I1114 14:53:27.180418  840593 node_ready.go:49] node "ingress-addon-legacy-944535" has status "Ready":"True"
	I1114 14:53:27.180445  840593 node_ready.go:38] duration metric: took 165.643199ms waiting for node "ingress-addon-legacy-944535" to be "Ready" ...
	I1114 14:53:27.180470  840593 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:53:27.522837  840593 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:27.584372  840593 main.go:141] libmachine: Making call to close driver server
	I1114 14:53:27.584399  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .Close
	I1114 14:53:27.584729  840593 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:53:27.584757  840593 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:53:27.584767  840593 main.go:141] libmachine: Making call to close driver server
	I1114 14:53:27.584777  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .Close
	I1114 14:53:27.585009  840593 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:53:27.585031  840593 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:53:27.649242  840593 main.go:141] libmachine: Making call to close driver server
	I1114 14:53:27.649273  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .Close
	I1114 14:53:27.649600  840593 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:53:27.649657  840593 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:53:27.649697  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) DBG | Closing plugin on server side
	I1114 14:53:27.739693  840593 main.go:141] libmachine: Making call to close driver server
	I1114 14:53:27.739728  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .Close
	I1114 14:53:27.739739  840593 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 14:53:27.740102  840593 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:53:27.740142  840593 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:53:27.740169  840593 main.go:141] libmachine: Making call to close driver server
	I1114 14:53:27.740181  840593 main.go:141] libmachine: (ingress-addon-legacy-944535) Calling .Close
	I1114 14:53:27.740438  840593 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:53:27.740455  840593 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:53:27.742352  840593 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1114 14:53:27.743638  840593 addons.go:502] enable addons completed in 1.028191144s: enabled=[default-storageclass storage-provisioner]
	I1114 14:53:29.644145  840593 pod_ready.go:102] pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace has status "Ready":"False"
	I1114 14:53:32.142470  840593 pod_ready.go:102] pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace has status "Ready":"False"
	I1114 14:53:34.143718  840593 pod_ready.go:102] pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace has status "Ready":"False"
	I1114 14:53:36.643162  840593 pod_ready.go:102] pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace has status "Ready":"False"
	I1114 14:53:38.143794  840593 pod_ready.go:92] pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace has status "Ready":"True"
	I1114 14:53:38.143828  840593 pod_ready.go:81] duration metric: took 10.620958926s waiting for pod "coredns-66bff467f8-4tmqr" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.143843  840593 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xkr5n" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.146188  840593 pod_ready.go:97] error getting pod "coredns-66bff467f8-xkr5n" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-xkr5n" not found
	I1114 14:53:38.146210  840593 pod_ready.go:81] duration metric: took 2.359269ms waiting for pod "coredns-66bff467f8-xkr5n" in "kube-system" namespace to be "Ready" ...
	E1114 14:53:38.146219  840593 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-xkr5n" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-xkr5n" not found
	I1114 14:53:38.146225  840593 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.151903  840593 pod_ready.go:92] pod "etcd-ingress-addon-legacy-944535" in "kube-system" namespace has status "Ready":"True"
	I1114 14:53:38.151923  840593 pod_ready.go:81] duration metric: took 5.690183ms waiting for pod "etcd-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.151935  840593 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.156792  840593 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-944535" in "kube-system" namespace has status "Ready":"True"
	I1114 14:53:38.156812  840593 pod_ready.go:81] duration metric: took 4.868237ms waiting for pod "kube-apiserver-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.156823  840593 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.169492  840593 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-944535" in "kube-system" namespace has status "Ready":"True"
	I1114 14:53:38.169512  840593 pod_ready.go:81] duration metric: took 12.680736ms waiting for pod "kube-controller-manager-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.169525  840593 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bdrcm" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.337645  840593 request.go:629] Waited for 160.333224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/nodes/ingress-addon-legacy-944535
	I1114 14:53:38.341075  840593 pod_ready.go:92] pod "kube-proxy-bdrcm" in "kube-system" namespace has status "Ready":"True"
	I1114 14:53:38.341106  840593 pod_ready.go:81] duration metric: took 171.572073ms waiting for pod "kube-proxy-bdrcm" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.341119  840593 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.537591  840593 request.go:629] Waited for 196.379741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-944535
	I1114 14:53:38.737575  840593 request.go:629] Waited for 195.400962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/nodes/ingress-addon-legacy-944535
	I1114 14:53:38.741060  840593 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-944535" in "kube-system" namespace has status "Ready":"True"
	I1114 14:53:38.741089  840593 pod_ready.go:81] duration metric: took 399.958349ms waiting for pod "kube-scheduler-ingress-addon-legacy-944535" in "kube-system" namespace to be "Ready" ...
	I1114 14:53:38.741100  840593 pod_ready.go:38] duration metric: took 11.560621011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:53:38.741120  840593 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:53:38.741186  840593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:53:38.753483  840593 api_server.go:72] duration metric: took 11.906377688s to wait for apiserver process to appear ...
	I1114 14:53:38.753510  840593 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:53:38.753530  840593 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1114 14:53:38.759810  840593 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
	ok
	I1114 14:53:38.761018  840593 api_server.go:141] control plane version: v1.18.20
	I1114 14:53:38.761047  840593 api_server.go:131] duration metric: took 7.529217ms to wait for apiserver health ...
	I1114 14:53:38.761057  840593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:53:38.937516  840593 request.go:629] Waited for 176.378224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/namespaces/kube-system/pods
	I1114 14:53:38.943763  840593 system_pods.go:59] 7 kube-system pods found
	I1114 14:53:38.943794  840593 system_pods.go:61] "coredns-66bff467f8-4tmqr" [9ae9566f-2b7c-4a2d-a851-24e8c015bedf] Running
	I1114 14:53:38.943799  840593 system_pods.go:61] "etcd-ingress-addon-legacy-944535" [75cef230-dfe8-40f0-9b94-f3767d947000] Running
	I1114 14:53:38.943803  840593 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-944535" [b9301885-6741-4f84-9829-2ece4441dfa9] Running
	I1114 14:53:38.943811  840593 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-944535" [b2e79d07-faab-4049-9853-7b65e0c9b300] Running
	I1114 14:53:38.943815  840593 system_pods.go:61] "kube-proxy-bdrcm" [6e16393c-b73d-41a3-b8e1-e70767a185d9] Running
	I1114 14:53:38.943819  840593 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-944535" [00b945cf-1ba3-4b7a-a3e5-65a286b9867d] Running
	I1114 14:53:38.943825  840593 system_pods.go:61] "storage-provisioner" [4a4c983c-d995-4e15-8ec8-231e18cdc507] Running
	I1114 14:53:38.943830  840593 system_pods.go:74] duration metric: took 182.767738ms to wait for pod list to return data ...
	I1114 14:53:38.943837  840593 default_sa.go:34] waiting for default service account to be created ...
	I1114 14:53:39.137287  840593 request.go:629] Waited for 193.370743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/namespaces/default/serviceaccounts
	I1114 14:53:39.140458  840593 default_sa.go:45] found service account: "default"
	I1114 14:53:39.140493  840593 default_sa.go:55] duration metric: took 196.639481ms for default service account to be created ...
	I1114 14:53:39.140502  840593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 14:53:39.337921  840593 request.go:629] Waited for 197.339264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/namespaces/kube-system/pods
	I1114 14:53:39.343181  840593 system_pods.go:86] 7 kube-system pods found
	I1114 14:53:39.343210  840593 system_pods.go:89] "coredns-66bff467f8-4tmqr" [9ae9566f-2b7c-4a2d-a851-24e8c015bedf] Running
	I1114 14:53:39.343215  840593 system_pods.go:89] "etcd-ingress-addon-legacy-944535" [75cef230-dfe8-40f0-9b94-f3767d947000] Running
	I1114 14:53:39.343223  840593 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-944535" [b9301885-6741-4f84-9829-2ece4441dfa9] Running
	I1114 14:53:39.343227  840593 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-944535" [b2e79d07-faab-4049-9853-7b65e0c9b300] Running
	I1114 14:53:39.343230  840593 system_pods.go:89] "kube-proxy-bdrcm" [6e16393c-b73d-41a3-b8e1-e70767a185d9] Running
	I1114 14:53:39.343234  840593 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-944535" [00b945cf-1ba3-4b7a-a3e5-65a286b9867d] Running
	I1114 14:53:39.343241  840593 system_pods.go:89] "storage-provisioner" [4a4c983c-d995-4e15-8ec8-231e18cdc507] Running
	I1114 14:53:39.343249  840593 system_pods.go:126] duration metric: took 202.74126ms to wait for k8s-apps to be running ...
	I1114 14:53:39.343257  840593 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:53:39.343306  840593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:53:39.356224  840593 system_svc.go:56] duration metric: took 12.954324ms WaitForService to wait for kubelet.
	I1114 14:53:39.356257  840593 kubeadm.go:581] duration metric: took 12.509160884s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:53:39.356284  840593 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:53:39.537940  840593 request.go:629] Waited for 181.571041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.198:8443/api/v1/nodes
	I1114 14:53:39.541190  840593 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:53:39.541257  840593 node_conditions.go:123] node cpu capacity is 2
	I1114 14:53:39.541270  840593 node_conditions.go:105] duration metric: took 184.980635ms to run NodePressure ...
	I1114 14:53:39.541282  840593 start.go:228] waiting for startup goroutines ...
	I1114 14:53:39.541291  840593 start.go:233] waiting for cluster config update ...
	I1114 14:53:39.541301  840593 start.go:242] writing updated cluster config ...
	I1114 14:53:39.541579  840593 ssh_runner.go:195] Run: rm -f paused
	I1114 14:53:39.593489  840593 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1114 14:53:39.595353  840593 out.go:177] 
	W1114 14:53:39.596936  840593 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1114 14:53:39.598598  840593 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1114 14:53:39.600147  840593 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-944535" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 14:52:36 UTC, ends at Tue 2023-11-14 14:56:38 UTC. --
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.468874876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973798468859199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=077779ec-a0d1-4cda-8f18-3437755ea98d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.469643555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb808a22-504a-4b97-8b58-f974c72843d9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.469698461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb808a22-504a-4b97-8b58-f974c72843d9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.470063011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:534c5582ede460d7572b2e811ceb4cab0d532b51d1a9ebc6f7f8623e6f7fe0dd,PodSandboxId:25d2dab5703bb566efbd563cea85081c009d71bf19bc0a7132b8263858543815,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973790488684426,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kjjnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd4ad15a-d225-48b2-b818-f51aedab0001,},Annotations:map[string]string{io.kubernetes.container.hash: 4c35ec44,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9233e970fe793043143d165d68fcf8c3b511d17d8a16d774c00da823175d852,PodSandboxId:f1c8d78e2235052980074d17b48ba2dcd3422a08e596f518687db328a7467a09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699973647269370486,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0772485-7b80-43e3-95b2-80b72f90f329,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 38bd0828,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b166d238c2bf79373c76bdf3c7a409ce2dbfc878c119c3a42592ad5ec5a8e5,PodSandboxId:352fb3f2b9e333b3ef2e7a20bbefc67a3cb0ad03b86911e1a5ba45cf0ad5bca5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699973631619716442,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-chnd4,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 893e2e76-4005-4ef1-9977-41f2abeae790,},Annotations:map[string]string{io.kubernetes.container.hash: e1692ca7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b54b4afaa3087c761a4be90866443fda4ed28b2441e488802912172becf770a,PodSandboxId:ee58811b074b33443a1f826b8a2bbe2f429d4f84c586b71a02b534bd67057231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622747897640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-84m9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4b72560-645e-44c0-864a-090aee036b30,},Annotations:map[string]string{io.kubernetes.container.hash: 34b85f72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c766937d3caa46863182ca07098d47e1a5ada99263c6bb117b5ba55994665f0,PodSandboxId:5ab3ba2cb63b627deb599a6a5152e301150e83f5e4faf488208643c124e79551,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622619190263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-72d2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de6ff4be-dce8-4570-a1e1-cb51e794b57e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9b6952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72da24225d420e20e8fa8d653062c4f0645dbc9dec5c4d9c53448ed4157356bd,PodSandboxId:4037b8b7293f9693781b86324923d0c62152d3d5dcb890b752858f44f344fe4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699973609027546327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4tmqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ae9566f-2b7c-4a2d-a851-24e8c015bedf,},Annotations:map[string]string{io.kubernetes.container.hash: 33fb66d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688702d3e69d87fb7b481182507
ccab87927fc286c4851e071482f45b8ff5854,PodSandboxId:6f9f12ef804670df6f92486b07fcdb830c49b0ba5d88b546ce81e2b770f92e0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699973608528900705,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4c983c-d995-4e15-8ec8-231e18cdc507,},Annotations:map[string]string{io.kubernetes.container.hash: 4b1c9d0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35b793e52231b48c5650bc1b8d1
c43d65bb9ca7dec3028ae397e7882ea66501,PodSandboxId:867e783527b57fd9bc8f1fd8d43482b6f66764504e8c5ae28dc67a6b76b7920d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699973608068467086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bdrcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e16393c-b73d-41a3-b8e1-e70767a185d9,},Annotations:map[string]string{io.kubernetes.container.hash: dacb2ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6024017908c3ee7b3b073ca0a55f6a38b7305df2cb166fc8caaced7c25b215db,Pod
SandboxId:921cf9e900492223c623dca7db79be4ffbc50cb86d892832f0115edc7501adef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699973584277661859,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8b360cd01d40a99a5792ee68d699c7,},Annotations:map[string]string{io.kubernetes.container.hash: f6fa8f8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d5f272969854b69ebfcb8d68060e8b91bb6666a5c533d938b8f7721785b3983,PodSandboxId:6696d415ba9d372daaf19b72b9ceb38557e7
1d5929c2d4bee58d20446edcbae8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699973582983733360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c493ed9cbc8206d7b282509c95c3e6966d67ae12d7ab3cf9f577742ae5dc85,PodSandboxId:d1e8fa7d07a60ffec754a705d6bd9138c7ed0d27ee
467c77494c2faa38f0f68f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699973582812868224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63963a11b048da4c0a0b4e98a0f17e8016e053dd7e7ab6e77166b9c1c6c4728a,PodSandboxId:fa3aae4b1d42
4eafd08f3e4bbb09bc332a81508d34de18876f89e867a4111608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699973582688741167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e7583e99913b77ec1c934f15e87bb3,},Annotations:map[string]string{io.kubernetes.container.hash: 27c149b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb808a22-504a-4b97-8b58-f974c72843d9 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.516089587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=52b1581d-885b-4dd7-96dc-25969206d1d0 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.516151996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=52b1581d-885b-4dd7-96dc-25969206d1d0 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.517390463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a7ca01ce-4b9c-41b2-b04e-c55cc1625bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.517974804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973798517901746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=a7ca01ce-4b9c-41b2-b04e-c55cc1625bf3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.518480535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=36367f37-253f-4b37-a44a-4f7a6d22c283 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.518531197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=36367f37-253f-4b37-a44a-4f7a6d22c283 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.518784728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:534c5582ede460d7572b2e811ceb4cab0d532b51d1a9ebc6f7f8623e6f7fe0dd,PodSandboxId:25d2dab5703bb566efbd563cea85081c009d71bf19bc0a7132b8263858543815,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973790488684426,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kjjnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd4ad15a-d225-48b2-b818-f51aedab0001,},Annotations:map[string]string{io.kubernetes.container.hash: 4c35ec44,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9233e970fe793043143d165d68fcf8c3b511d17d8a16d774c00da823175d852,PodSandboxId:f1c8d78e2235052980074d17b48ba2dcd3422a08e596f518687db328a7467a09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699973647269370486,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0772485-7b80-43e3-95b2-80b72f90f329,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 38bd0828,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b166d238c2bf79373c76bdf3c7a409ce2dbfc878c119c3a42592ad5ec5a8e5,PodSandboxId:352fb3f2b9e333b3ef2e7a20bbefc67a3cb0ad03b86911e1a5ba45cf0ad5bca5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699973631619716442,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-chnd4,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 893e2e76-4005-4ef1-9977-41f2abeae790,},Annotations:map[string]string{io.kubernetes.container.hash: e1692ca7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b54b4afaa3087c761a4be90866443fda4ed28b2441e488802912172becf770a,PodSandboxId:ee58811b074b33443a1f826b8a2bbe2f429d4f84c586b71a02b534bd67057231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622747897640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-84m9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4b72560-645e-44c0-864a-090aee036b30,},Annotations:map[string]string{io.kubernetes.container.hash: 34b85f72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c766937d3caa46863182ca07098d47e1a5ada99263c6bb117b5ba55994665f0,PodSandboxId:5ab3ba2cb63b627deb599a6a5152e301150e83f5e4faf488208643c124e79551,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622619190263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-72d2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de6ff4be-dce8-4570-a1e1-cb51e794b57e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9b6952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72da24225d420e20e8fa8d653062c4f0645dbc9dec5c4d9c53448ed4157356bd,PodSandboxId:4037b8b7293f9693781b86324923d0c62152d3d5dcb890b752858f44f344fe4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699973609027546327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4tmqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ae9566f-2b7c-4a2d-a851-24e8c015bedf,},Annotations:map[string]string{io.kubernetes.container.hash: 33fb66d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688702d3e69d87fb7b481182507
ccab87927fc286c4851e071482f45b8ff5854,PodSandboxId:6f9f12ef804670df6f92486b07fcdb830c49b0ba5d88b546ce81e2b770f92e0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699973608528900705,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4c983c-d995-4e15-8ec8-231e18cdc507,},Annotations:map[string]string{io.kubernetes.container.hash: 4b1c9d0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35b793e52231b48c5650bc1b8d1
c43d65bb9ca7dec3028ae397e7882ea66501,PodSandboxId:867e783527b57fd9bc8f1fd8d43482b6f66764504e8c5ae28dc67a6b76b7920d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699973608068467086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bdrcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e16393c-b73d-41a3-b8e1-e70767a185d9,},Annotations:map[string]string{io.kubernetes.container.hash: dacb2ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6024017908c3ee7b3b073ca0a55f6a38b7305df2cb166fc8caaced7c25b215db,Pod
SandboxId:921cf9e900492223c623dca7db79be4ffbc50cb86d892832f0115edc7501adef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699973584277661859,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8b360cd01d40a99a5792ee68d699c7,},Annotations:map[string]string{io.kubernetes.container.hash: f6fa8f8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d5f272969854b69ebfcb8d68060e8b91bb6666a5c533d938b8f7721785b3983,PodSandboxId:6696d415ba9d372daaf19b72b9ceb38557e7
1d5929c2d4bee58d20446edcbae8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699973582983733360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c493ed9cbc8206d7b282509c95c3e6966d67ae12d7ab3cf9f577742ae5dc85,PodSandboxId:d1e8fa7d07a60ffec754a705d6bd9138c7ed0d27ee
467c77494c2faa38f0f68f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699973582812868224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63963a11b048da4c0a0b4e98a0f17e8016e053dd7e7ab6e77166b9c1c6c4728a,PodSandboxId:fa3aae4b1d42
4eafd08f3e4bbb09bc332a81508d34de18876f89e867a4111608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699973582688741167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e7583e99913b77ec1c934f15e87bb3,},Annotations:map[string]string{io.kubernetes.container.hash: 27c149b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=36367f37-253f-4b37-a44a-4f7a6d22c283 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.557751221Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7e9ad8f2-849e-4b3e-ab31-ec1905c15be4 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.557807583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7e9ad8f2-849e-4b3e-ab31-ec1905c15be4 name=/runtime.v1.RuntimeService/Version
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.559029913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cb5daeb3-a06d-470e-a453-522162bf9c1e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.559474082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973798559461089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=cb5daeb3-a06d-470e-a453-522162bf9c1e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.559871541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85e8721f-f66d-4c97-a6f1-c78f3bedf204 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.559984436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85e8721f-f66d-4c97-a6f1-c78f3bedf204 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.560227692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:534c5582ede460d7572b2e811ceb4cab0d532b51d1a9ebc6f7f8623e6f7fe0dd,PodSandboxId:25d2dab5703bb566efbd563cea85081c009d71bf19bc0a7132b8263858543815,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973790488684426,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kjjnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd4ad15a-d225-48b2-b818-f51aedab0001,},Annotations:map[string]string{io.kubernetes.container.hash: 4c35ec44,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9233e970fe793043143d165d68fcf8c3b511d17d8a16d774c00da823175d852,PodSandboxId:f1c8d78e2235052980074d17b48ba2dcd3422a08e596f518687db328a7467a09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699973647269370486,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0772485-7b80-43e3-95b2-80b72f90f329,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 38bd0828,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b166d238c2bf79373c76bdf3c7a409ce2dbfc878c119c3a42592ad5ec5a8e5,PodSandboxId:352fb3f2b9e333b3ef2e7a20bbefc67a3cb0ad03b86911e1a5ba45cf0ad5bca5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699973631619716442,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-chnd4,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 893e2e76-4005-4ef1-9977-41f2abeae790,},Annotations:map[string]string{io.kubernetes.container.hash: e1692ca7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b54b4afaa3087c761a4be90866443fda4ed28b2441e488802912172becf770a,PodSandboxId:ee58811b074b33443a1f826b8a2bbe2f429d4f84c586b71a02b534bd67057231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622747897640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-84m9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4b72560-645e-44c0-864a-090aee036b30,},Annotations:map[string]string{io.kubernetes.container.hash: 34b85f72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c766937d3caa46863182ca07098d47e1a5ada99263c6bb117b5ba55994665f0,PodSandboxId:5ab3ba2cb63b627deb599a6a5152e301150e83f5e4faf488208643c124e79551,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622619190263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-72d2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de6ff4be-dce8-4570-a1e1-cb51e794b57e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9b6952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72da24225d420e20e8fa8d653062c4f0645dbc9dec5c4d9c53448ed4157356bd,PodSandboxId:4037b8b7293f9693781b86324923d0c62152d3d5dcb890b752858f44f344fe4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699973609027546327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4tmqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ae9566f-2b7c-4a2d-a851-24e8c015bedf,},Annotations:map[string]string{io.kubernetes.container.hash: 33fb66d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688702d3e69d87fb7b481182507
ccab87927fc286c4851e071482f45b8ff5854,PodSandboxId:6f9f12ef804670df6f92486b07fcdb830c49b0ba5d88b546ce81e2b770f92e0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699973608528900705,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4c983c-d995-4e15-8ec8-231e18cdc507,},Annotations:map[string]string{io.kubernetes.container.hash: 4b1c9d0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35b793e52231b48c5650bc1b8d1
c43d65bb9ca7dec3028ae397e7882ea66501,PodSandboxId:867e783527b57fd9bc8f1fd8d43482b6f66764504e8c5ae28dc67a6b76b7920d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699973608068467086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bdrcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e16393c-b73d-41a3-b8e1-e70767a185d9,},Annotations:map[string]string{io.kubernetes.container.hash: dacb2ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6024017908c3ee7b3b073ca0a55f6a38b7305df2cb166fc8caaced7c25b215db,Pod
SandboxId:921cf9e900492223c623dca7db79be4ffbc50cb86d892832f0115edc7501adef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699973584277661859,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8b360cd01d40a99a5792ee68d699c7,},Annotations:map[string]string{io.kubernetes.container.hash: f6fa8f8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d5f272969854b69ebfcb8d68060e8b91bb6666a5c533d938b8f7721785b3983,PodSandboxId:6696d415ba9d372daaf19b72b9ceb38557e7
1d5929c2d4bee58d20446edcbae8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699973582983733360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c493ed9cbc8206d7b282509c95c3e6966d67ae12d7ab3cf9f577742ae5dc85,PodSandboxId:d1e8fa7d07a60ffec754a705d6bd9138c7ed0d27ee
467c77494c2faa38f0f68f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699973582812868224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63963a11b048da4c0a0b4e98a0f17e8016e053dd7e7ab6e77166b9c1c6c4728a,PodSandboxId:fa3aae4b1d42
4eafd08f3e4bbb09bc332a81508d34de18876f89e867a4111608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699973582688741167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e7583e99913b77ec1c934f15e87bb3,},Annotations:map[string]string{io.kubernetes.container.hash: 27c149b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85e8721f-f66d-4c97-a6f1-c78f3bedf204 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.593702105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e9f787cb-96f2-43a5-9d13-dcd0c06cbf8d name=/runtime.v1.RuntimeService/Version
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.593754088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e9f787cb-96f2-43a5-9d13-dcd0c06cbf8d name=/runtime.v1.RuntimeService/Version
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.594731880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=51ae5011-f5b2-43a1-884c-634ff77e6c70 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.595306622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699973798595292352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=51ae5011-f5b2-43a1-884c-634ff77e6c70 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.596030196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=24edd338-e5c7-4aa2-a4e3-38f0af45471c name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.596104298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=24edd338-e5c7-4aa2-a4e3-38f0af45471c name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 14:56:38 ingress-addon-legacy-944535 crio[720]: time="2023-11-14 14:56:38.596386325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:534c5582ede460d7572b2e811ceb4cab0d532b51d1a9ebc6f7f8623e6f7fe0dd,PodSandboxId:25d2dab5703bb566efbd563cea85081c009d71bf19bc0a7132b8263858543815,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699973790488684426,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kjjnh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cd4ad15a-d225-48b2-b818-f51aedab0001,},Annotations:map[string]string{io.kubernetes.container.hash: 4c35ec44,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9233e970fe793043143d165d68fcf8c3b511d17d8a16d774c00da823175d852,PodSandboxId:f1c8d78e2235052980074d17b48ba2dcd3422a08e596f518687db328a7467a09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699973647269370486,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0772485-7b80-43e3-95b2-80b72f90f329,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 38bd0828,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b166d238c2bf79373c76bdf3c7a409ce2dbfc878c119c3a42592ad5ec5a8e5,PodSandboxId:352fb3f2b9e333b3ef2e7a20bbefc67a3cb0ad03b86911e1a5ba45cf0ad5bca5,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699973631619716442,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-chnd4,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 893e2e76-4005-4ef1-9977-41f2abeae790,},Annotations:map[string]string{io.kubernetes.container.hash: e1692ca7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b54b4afaa3087c761a4be90866443fda4ed28b2441e488802912172becf770a,PodSandboxId:ee58811b074b33443a1f826b8a2bbe2f429d4f84c586b71a02b534bd67057231,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622747897640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-84m9m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4b72560-645e-44c0-864a-090aee036b30,},Annotations:map[string]string{io.kubernetes.container.hash: 34b85f72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c766937d3caa46863182ca07098d47e1a5ada99263c6bb117b5ba55994665f0,PodSandboxId:5ab3ba2cb63b627deb599a6a5152e301150e83f5e4faf488208643c124e79551,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699973622619190263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-72d2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de6ff4be-dce8-4570-a1e1-cb51e794b57e,},Annotations:map[string]string{io.kubernetes.container.hash: 3e9b6952,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72da24225d420e20e8fa8d653062c4f0645dbc9dec5c4d9c53448ed4157356bd,PodSandboxId:4037b8b7293f9693781b86324923d0c62152d3d5dcb890b752858f44f344fe4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699973609027546327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4tmqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ae9566f-2b7c-4a2d-a851-24e8c015bedf,},Annotations:map[string]string{io.kubernetes.container.hash: 33fb66d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688702d3e69d87fb7b481182507
ccab87927fc286c4851e071482f45b8ff5854,PodSandboxId:6f9f12ef804670df6f92486b07fcdb830c49b0ba5d88b546ce81e2b770f92e0b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699973608528900705,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4c983c-d995-4e15-8ec8-231e18cdc507,},Annotations:map[string]string{io.kubernetes.container.hash: 4b1c9d0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35b793e52231b48c5650bc1b8d1
c43d65bb9ca7dec3028ae397e7882ea66501,PodSandboxId:867e783527b57fd9bc8f1fd8d43482b6f66764504e8c5ae28dc67a6b76b7920d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699973608068467086,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bdrcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e16393c-b73d-41a3-b8e1-e70767a185d9,},Annotations:map[string]string{io.kubernetes.container.hash: dacb2ac3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6024017908c3ee7b3b073ca0a55f6a38b7305df2cb166fc8caaced7c25b215db,Pod
SandboxId:921cf9e900492223c623dca7db79be4ffbc50cb86d892832f0115edc7501adef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699973584277661859,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8b360cd01d40a99a5792ee68d699c7,},Annotations:map[string]string{io.kubernetes.container.hash: f6fa8f8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d5f272969854b69ebfcb8d68060e8b91bb6666a5c533d938b8f7721785b3983,PodSandboxId:6696d415ba9d372daaf19b72b9ceb38557e7
1d5929c2d4bee58d20446edcbae8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699973582983733360,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c493ed9cbc8206d7b282509c95c3e6966d67ae12d7ab3cf9f577742ae5dc85,PodSandboxId:d1e8fa7d07a60ffec754a705d6bd9138c7ed0d27ee
467c77494c2faa38f0f68f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699973582812868224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63963a11b048da4c0a0b4e98a0f17e8016e053dd7e7ab6e77166b9c1c6c4728a,PodSandboxId:fa3aae4b1d42
4eafd08f3e4bbb09bc332a81508d34de18876f89e867a4111608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699973582688741167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-944535,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e7583e99913b77ec1c934f15e87bb3,},Annotations:map[string]string{io.kubernetes.container.hash: 27c149b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=24edd338-e5c7-4aa2-a4e3-38f0af45471c name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	534c5582ede46       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            8 seconds ago       Running             hello-world-app           0                   25d2dab5703bb       hello-world-app-5f5d8b66bb-kjjnh
	a9233e970fe79       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   f1c8d78e22350       nginx
	02b166d238c2b       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   352fb3f2b9e33       ingress-nginx-controller-7fcf777cb7-chnd4
	0b54b4afaa308       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   ee58811b074b3       ingress-nginx-admission-patch-84m9m
	4c766937d3caa       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   5ab3ba2cb63b6       ingress-nginx-admission-create-72d2l
	72da24225d420       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   4037b8b7293f9       coredns-66bff467f8-4tmqr
	688702d3e69d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   6f9f12ef80467       storage-provisioner
	d35b793e52231       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   867e783527b57       kube-proxy-bdrcm
	6024017908c3e       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   921cf9e900492       etcd-ingress-addon-legacy-944535
	7d5f272969854       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   6696d415ba9d3       kube-scheduler-ingress-addon-legacy-944535
	a9c493ed9cbc8       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   d1e8fa7d07a60       kube-controller-manager-ingress-addon-legacy-944535
	63963a11b048d       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   fa3aae4b1d424       kube-apiserver-ingress-addon-legacy-944535
	
	* 
	* ==> coredns [72da24225d420e20e8fa8d653062c4f0645dbc9dec5c4d9c53448ed4157356bd] <==
	* [INFO] 10.244.0.6:38634 - 17018 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000174977s
	[INFO] 10.244.0.6:57810 - 20131 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006079s
	[INFO] 10.244.0.6:38634 - 32651 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0005709s
	[INFO] 10.244.0.6:38634 - 14945 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000113319s
	[INFO] 10.244.0.6:38634 - 25426 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000154548s
	[INFO] 10.244.0.6:57810 - 19177 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031879s
	[INFO] 10.244.0.6:57810 - 51235 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026501s
	[INFO] 10.244.0.6:57810 - 28423 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00011353s
	[INFO] 10.244.0.6:57810 - 15488 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027953s
	[INFO] 10.244.0.6:57810 - 20523 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002197s
	[INFO] 10.244.0.6:57810 - 36943 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034829s
	[INFO] 10.244.0.6:36463 - 56770 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119888s
	[INFO] 10.244.0.6:58370 - 27490 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047706s
	[INFO] 10.244.0.6:36463 - 21817 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080132s
	[INFO] 10.244.0.6:58370 - 9530 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000185331s
	[INFO] 10.244.0.6:58370 - 11552 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043232s
	[INFO] 10.244.0.6:36463 - 6419 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050165s
	[INFO] 10.244.0.6:36463 - 36963 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043653s
	[INFO] 10.244.0.6:58370 - 44875 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083032s
	[INFO] 10.244.0.6:36463 - 35902 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087471s
	[INFO] 10.244.0.6:58370 - 17379 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037139s
	[INFO] 10.244.0.6:58370 - 18610 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034945s
	[INFO] 10.244.0.6:36463 - 16781 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008326s
	[INFO] 10.244.0.6:36463 - 42397 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041618s
	[INFO] 10.244.0.6:58370 - 10352 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031909s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-944535
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-944535
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=ingress-addon-legacy-944535
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T14_53_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:53:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-944535
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:56:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:54:21 +0000   Tue, 14 Nov 2023 14:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:54:21 +0000   Tue, 14 Nov 2023 14:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:54:21 +0000   Tue, 14 Nov 2023 14:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:54:21 +0000   Tue, 14 Nov 2023 14:53:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ingress-addon-legacy-944535
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 879c37b92314494bb6bc9c2ee6c9c560
	  System UUID:                879c37b9-2314-494b-b6bc-9c2ee6c9c560
	  Boot ID:                    137e7d5e-6bf3-43b8-a0d9-b3da33cb9c97
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-kjjnh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-66bff467f8-4tmqr                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m12s
	  kube-system                 etcd-ingress-addon-legacy-944535                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-apiserver-ingress-addon-legacy-944535             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-944535    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-bdrcm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-scheduler-ingress-addon-legacy-944535             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m37s (x5 over 3m37s)  kubelet     Node ingress-addon-legacy-944535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m37s (x5 over 3m37s)  kubelet     Node ingress-addon-legacy-944535 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m37s (x5 over 3m37s)  kubelet     Node ingress-addon-legacy-944535 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m27s                  kubelet     Node ingress-addon-legacy-944535 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s                  kubelet     Node ingress-addon-legacy-944535 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s                  kubelet     Node ingress-addon-legacy-944535 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m17s                  kubelet     Node ingress-addon-legacy-944535 status is now: NodeReady
	  Normal  Starting                 3m10s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov14 14:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093257] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.381388] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.401540] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151523] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.988406] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.046892] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.108773] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.135235] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.097470] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.205762] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +7.757426] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[Nov14 14:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.635957] systemd-fstab-generator[1435]: Ignoring "noauto" for root device
	[ +17.304604] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.497567] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.058714] kauditd_printk_skb: 10 callbacks suppressed
	[Nov14 14:54] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.726757] kauditd_printk_skb: 3 callbacks suppressed
	[Nov14 14:56] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [6024017908c3ee7b3b073ca0a55f6a38b7305df2cb166fc8caaced7c25b215db] <==
	* raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 became follower at term 1
	raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)
	2023-11-14 14:53:04.397182 W | auth: simple token is not cryptographically signed
	2023-11-14 14:53:04.401363 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-14 14:53:04.404038 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 14:53:04.404177 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-14 14:53:04.404225 I | embed: listening for peers on 192.168.39.198:2380
	2023-11-14 14:53:04.404282 I | etcdserver: f1d2ab5330a2a0e3 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 switched to configuration voters=(17425178282036469987)
	2023-11-14 14:53:04.404515 I | etcdserver/membership: added member f1d2ab5330a2a0e3 [https://192.168.39.198:2380] to cluster 9fb372ad12afeb1b
	raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 is starting a new election at term 1
	raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 became candidate at term 2
	raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 received MsgVoteResp from f1d2ab5330a2a0e3 at term 2
	raft2023/11/14 14:53:04 INFO: f1d2ab5330a2a0e3 became leader at term 2
	raft2023/11/14 14:53:04 INFO: raft.node: f1d2ab5330a2a0e3 elected leader f1d2ab5330a2a0e3 at term 2
	2023-11-14 14:53:04.990029 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-14 14:53:04.991758 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-14 14:53:04.992438 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-14 14:53:04.992688 I | etcdserver: published {Name:ingress-addon-legacy-944535 ClientURLs:[https://192.168.39.198:2379]} to cluster 9fb372ad12afeb1b
	2023-11-14 14:53:04.992733 I | embed: ready to serve client requests
	2023-11-14 14:53:04.993060 I | embed: ready to serve client requests
	2023-11-14 14:53:04.993777 I | embed: serving client requests on 192.168.39.198:2379
	2023-11-14 14:53:04.995982 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-14 14:53:26.505616 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/service-account-controller\" " with result "range_response_count:1 size:220" took too long (481.420194ms) to execute
	2023-11-14 14:53:26.506016 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (405.266708ms) to execute
	
	* 
	* ==> kernel <==
	*  14:56:38 up 4 min,  0 users,  load average: 1.13, 0.57, 0.24
	Linux ingress-addon-legacy-944535 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [63963a11b048da4c0a0b4e98a0f17e8016e053dd7e7ab6e77166b9c1c6c4728a] <==
	* I1114 14:53:07.915311       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1114 14:53:07.927101       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.198, ResourceVersion: 0, AdditionalErrorMsg: 
	I1114 14:53:08.016812       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1114 14:53:08.019348       1 cache.go:39] Caches are synced for autoregister controller
	I1114 14:53:08.019687       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 14:53:08.019743       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 14:53:08.019765       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1114 14:53:08.910123       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1114 14:53:08.910216       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1114 14:53:08.918202       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1114 14:53:08.925550       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1114 14:53:08.925655       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1114 14:53:09.377668       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 14:53:09.418622       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1114 14:53:09.485360       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.198]
	I1114 14:53:09.486126       1 controller.go:609] quota admission added evaluator for: endpoints
	I1114 14:53:09.489778       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 14:53:10.262783       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1114 14:53:11.225819       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1114 14:53:11.363382       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1114 14:53:11.732430       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 14:53:26.669703       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1114 14:53:26.814477       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1114 14:53:40.387627       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1114 14:54:02.914346       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [a9c493ed9cbc8206d7b282509c95c3e6966d67ae12d7ab3cf9f577742ae5dc85] <==
	* E1114 14:53:26.809048       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I1114 14:53:26.849839       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8c37a771-beb5-4862-8ac4-91613d8af5e9", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-4tmqr
	I1114 14:53:26.859721       1 shared_informer.go:230] Caches are synced for disruption 
	I1114 14:53:26.859766       1 disruption.go:339] Sending events to api server.
	I1114 14:53:26.882161       1 shared_informer.go:230] Caches are synced for stateful set 
	I1114 14:53:26.897142       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1114 14:53:27.009031       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0f7d0b11-4f2d-4340-98c9-5813bdb24cdb", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1114 14:53:27.014090       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"92df6dc4-6340-4f6a-8639-08d2e325f0ee", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-bdrcm
	I1114 14:53:27.077466       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1114 14:53:27.163092       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 14:53:27.163244       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 14:53:27.177621       1 shared_informer.go:230] Caches are synced for endpoint 
	I1114 14:53:27.209315       1 shared_informer.go:230] Caches are synced for attach detach 
	I1114 14:53:27.212046       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 14:53:27.212120       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1114 14:53:27.226523       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 14:53:27.529872       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8c37a771-beb5-4862-8ac4-91613d8af5e9", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-xkr5n
	I1114 14:53:40.375326       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"37562d0d-7af5-4a3a-a00d-a4374d984a2d", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1114 14:53:40.409519       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"83b31188-ce2a-43b8-bbbe-696968ad4cea", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-chnd4
	I1114 14:53:40.409591       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fb872854-46ba-4dc8-8c5f-8d7114c4bfb9", APIVersion:"batch/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-72d2l
	I1114 14:53:40.496478       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d2e05940-baf3-4e81-a668-68715abfd6de", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-84m9m
	I1114 14:53:43.918175       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d2e05940-baf3-4e81-a668-68715abfd6de", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1114 14:53:43.946620       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fb872854-46ba-4dc8-8c5f-8d7114c4bfb9", APIVersion:"batch/v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1114 14:56:27.303767       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b126f969-6e64-4695-89fd-599476fba8f6", APIVersion:"apps/v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1114 14:56:27.313857       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"bfc37da8-7b21-4a9d-8b92-f598fc95c7fa", APIVersion:"apps/v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-kjjnh
	
	* 
	* ==> kube-proxy [d35b793e52231b48c5650bc1b8d1c43d65bb9ca7dec3028ae397e7882ea66501] <==
	* W1114 14:53:28.347329       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1114 14:53:28.360503       1 node.go:136] Successfully retrieved node IP: 192.168.39.198
	I1114 14:53:28.360571       1 server_others.go:186] Using iptables Proxier.
	I1114 14:53:28.363092       1 server.go:583] Version: v1.18.20
	I1114 14:53:28.368430       1 config.go:315] Starting service config controller
	I1114 14:53:28.372349       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1114 14:53:28.369109       1 config.go:133] Starting endpoints config controller
	I1114 14:53:28.374559       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1114 14:53:28.476125       1 shared_informer.go:230] Caches are synced for service config 
	I1114 14:53:28.476497       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [7d5f272969854b69ebfcb8d68060e8b91bb6666a5c533d938b8f7721785b3983] <==
	* I1114 14:53:08.035546       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1114 14:53:08.044731       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 14:53:08.044876       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 14:53:08.045872       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1114 14:53:08.046078       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1114 14:53:08.053888       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 14:53:08.054078       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 14:53:08.054226       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 14:53:08.054298       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 14:53:08.054324       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 14:53:08.054640       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 14:53:08.054757       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 14:53:08.055140       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 14:53:08.055233       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:53:08.055312       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 14:53:08.055464       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:53:08.055547       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 14:53:08.922540       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 14:53:08.962342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 14:53:08.970847       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 14:53:09.024500       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 14:53:09.033533       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:53:09.104687       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 14:53:09.217776       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1114 14:53:09.645556       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 14:52:36 UTC, ends at Tue 2023-11-14 14:56:39 UTC. --
	Nov 14 14:53:52 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:53:52.833396    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-746nm" (UniqueName: "kubernetes.io/secret/cf819737-d24e-4819-8330-9bd28f93bbde-minikube-ingress-dns-token-746nm") pod "kube-ingress-dns-minikube" (UID: "cf819737-d24e-4819-8330-9bd28f93bbde")
	Nov 14 14:54:03 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:54:03.107470    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 14 14:54:03 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:54:03.109386    1442 reflector.go:178] object-"default"/"default-token-lbvrm": Failed to list *v1.Secret: secrets "default-token-lbvrm" is forbidden: User "system:node:ingress-addon-legacy-944535" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "ingress-addon-legacy-944535" and this object
	Nov 14 14:54:03 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:54:03.267752    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-lbvrm" (UniqueName: "kubernetes.io/secret/c0772485-7b80-43e3-95b2-80b72f90f329-default-token-lbvrm") pod "nginx" (UID: "c0772485-7b80-43e3-95b2-80b72f90f329")
	Nov 14 14:54:04 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:54:04.368478    1442 secret.go:195] Couldn't get secret default/default-token-lbvrm: failed to sync secret cache: timed out waiting for the condition
	Nov 14 14:54:04 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:54:04.368644    1442 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/c0772485-7b80-43e3-95b2-80b72f90f329-default-token-lbvrm podName:c0772485-7b80-43e3-95b2-80b72f90f329 nodeName:}" failed. No retries permitted until 2023-11-14 14:54:04.868621505 +0000 UTC m=+53.694406490 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-lbvrm\" (UniqueName: \"kubernetes.io/secret/c0772485-7b80-43e3-95b2-80b72f90f329-default-token-lbvrm\") pod \"nginx\" (UID: \"c0772485-7b80-43e3-95b2-80b72f90f329\") : failed to sync secret cache: timed out waiting for the condition"
	Nov 14 14:56:27 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:27.341120    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 14 14:56:27 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:27.445004    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-lbvrm" (UniqueName: "kubernetes.io/secret/cd4ad15a-d225-48b2-b818-f51aedab0001-default-token-lbvrm") pod "hello-world-app-5f5d8b66bb-kjjnh" (UID: "cd4ad15a-d225-48b2-b818-f51aedab0001")
	Nov 14 14:56:28 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:28.927995    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a
	Nov 14 14:56:29 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:29.051670    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-746nm" (UniqueName: "kubernetes.io/secret/cf819737-d24e-4819-8330-9bd28f93bbde-minikube-ingress-dns-token-746nm") pod "cf819737-d24e-4819-8330-9bd28f93bbde" (UID: "cf819737-d24e-4819-8330-9bd28f93bbde")
	Nov 14 14:56:29 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:29.066547    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf819737-d24e-4819-8330-9bd28f93bbde-minikube-ingress-dns-token-746nm" (OuterVolumeSpecName: "minikube-ingress-dns-token-746nm") pod "cf819737-d24e-4819-8330-9bd28f93bbde" (UID: "cf819737-d24e-4819-8330-9bd28f93bbde"). InnerVolumeSpecName "minikube-ingress-dns-token-746nm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 14:56:29 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:29.152086    1442 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-746nm" (UniqueName: "kubernetes.io/secret/cf819737-d24e-4819-8330-9bd28f93bbde-minikube-ingress-dns-token-746nm") on node "ingress-addon-legacy-944535" DevicePath ""
	Nov 14 14:56:29 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:29.253670    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a
	Nov 14 14:56:29 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:56:29.255445    1442 remote_runtime.go:295] ContainerStatus "3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a" from runtime service failed: rpc error: code = NotFound desc = could not find container "3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a": container with ID starting with 3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a not found: ID does not exist
	Nov 14 14:56:29 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:56:29.754535    1442 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a\": container with ID starting with 3eef3a1a13bce5db6c493114f4a57a1e3dad2d357520f0b53d93c6c6ebfa759a not found: ID does not exist"
	Nov 14 14:56:31 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:56:31.123836    1442 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-chnd4.17978527fb1ef5d5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-chnd4", UID:"893e2e76-4005-4ef1-9977-41f2abeae790", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-944535"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14d0087c7177fd5, ext:199944765537, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14d0087c7177fd5, ext:199944765537, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-chnd4.17978527fb1ef5d5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 14 14:56:31 ingress-addon-legacy-944535 kubelet[1442]: E1114 14:56:31.142831    1442 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-chnd4.17978527fb1ef5d5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-chnd4", UID:"893e2e76-4005-4ef1-9977-41f2abeae790", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-944535"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14d0087c7177fd5, ext:199944765537, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14d0087c7e3fff9, ext:199958167683, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-chnd4.17978527fb1ef5d5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 14 14:56:34 ingress-addon-legacy-944535 kubelet[1442]: W1114 14:56:34.000855    1442 pod_container_deletor.go:77] Container "352fb3f2b9e333b3ef2e7a20bbefc67a3cb0ad03b86911e1a5ba45cf0ad5bca5" not found in pod's containers
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:35.274278    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/893e2e76-4005-4ef1-9977-41f2abeae790-webhook-cert") pod "893e2e76-4005-4ef1-9977-41f2abeae790" (UID: "893e2e76-4005-4ef1-9977-41f2abeae790")
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:35.274390    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-ppwln" (UniqueName: "kubernetes.io/secret/893e2e76-4005-4ef1-9977-41f2abeae790-ingress-nginx-token-ppwln") pod "893e2e76-4005-4ef1-9977-41f2abeae790" (UID: "893e2e76-4005-4ef1-9977-41f2abeae790")
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:35.280990    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893e2e76-4005-4ef1-9977-41f2abeae790-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "893e2e76-4005-4ef1-9977-41f2abeae790" (UID: "893e2e76-4005-4ef1-9977-41f2abeae790"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:35.281051    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893e2e76-4005-4ef1-9977-41f2abeae790-ingress-nginx-token-ppwln" (OuterVolumeSpecName: "ingress-nginx-token-ppwln") pod "893e2e76-4005-4ef1-9977-41f2abeae790" (UID: "893e2e76-4005-4ef1-9977-41f2abeae790"). InnerVolumeSpecName "ingress-nginx-token-ppwln". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:35.374817    1442 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/893e2e76-4005-4ef1-9977-41f2abeae790-webhook-cert") on node "ingress-addon-legacy-944535" DevicePath ""
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: I1114 14:56:35.374894    1442 reconciler.go:319] Volume detached for volume "ingress-nginx-token-ppwln" (UniqueName: "kubernetes.io/secret/893e2e76-4005-4ef1-9977-41f2abeae790-ingress-nginx-token-ppwln") on node "ingress-addon-legacy-944535" DevicePath ""
	Nov 14 14:56:35 ingress-addon-legacy-944535 kubelet[1442]: W1114 14:56:35.757725    1442 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/893e2e76-4005-4ef1-9977-41f2abeae790/volumes" does not exist
	
	* 
	* ==> storage-provisioner [688702d3e69d87fb7b481182507ccab87927fc286c4851e071482f45b8ff5854] <==
	* I1114 14:53:28.626630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 14:53:28.644222       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 14:53:28.644288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 14:53:28.651523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 14:53:28.652093       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-944535_5c5f974b-01c3-4880-8edc-f7aa9c7a1038!
	I1114 14:53:28.655435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ccc742c-19aa-4c55-bb1e-ae7b14f1bce8", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-944535_5c5f974b-01c3-4880-8edc-f7aa9c7a1038 became leader
	I1114 14:53:28.752892       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-944535_5c5f974b-01c3-4880-8edc-f7aa9c7a1038!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-944535 -n ingress-addon-legacy-944535
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-944535 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (166.94s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- sh -c "ping -c 1 192.168.39.1": exit status 1 (192.209322ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-nqqlc): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- sh -c "ping -c 1 192.168.39.1": exit status 1 (191.424413ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-rxmbm): exit status 1
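The "ping: permission denied (are you root?)" message typically indicates the container process is not running as root and lacks CAP_NET_RAW, which busybox ping needs to open a raw ICMP socket; it does not by itself point to a network problem. A minimal diagnostic sketch, reusing the profile and pod names from the run above (the `id` check is an addition here purely for illustration):
	# which user does the exec session run as inside the busybox pod?
	out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- id
	# the same ICMP probe the test performs against the host gateway
	out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- sh -c "ping -c 1 192.168.39.1"
If the first command reports a non-root uid, the failures above are consistent with the missing capability rather than with host connectivity.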
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-627820 -n multinode-627820
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-627820 logs -n 25: (1.331080373s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-286482 ssh -- ls                    | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-286482 ssh --                       | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-286482                           | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	| start   | -p mount-start-2-286482                           | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC |                     |
	|         | --profile mount-start-2-286482                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-286482 ssh -- ls                    | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-286482 ssh --                       | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-286482                           | mount-start-2-286482 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	| delete  | -p mount-start-1-265134                           | mount-start-1-265134 | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:01 UTC |
	| start   | -p multinode-627820                               | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:01 UTC | 14 Nov 23 15:03 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- apply -f                   | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- rollout                    | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- get pods -o                | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- get pods -o                | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-nqqlc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-rxmbm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-nqqlc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-rxmbm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-nqqlc -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-rxmbm -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- get pods -o                | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-nqqlc                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC |                     |
	|         | busybox-5bc68d56bd-nqqlc -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC | 14 Nov 23 15:03 UTC |
	|         | busybox-5bc68d56bd-rxmbm                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-627820 -- exec                       | multinode-627820     | jenkins | v1.32.0 | 14 Nov 23 15:03 UTC |                     |
	|         | busybox-5bc68d56bd-rxmbm -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:01:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:01:34.661866  844608 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:01:34.662021  844608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:01:34.662031  844608 out.go:309] Setting ErrFile to fd 2...
	I1114 15:01:34.662036  844608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:01:34.662231  844608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:01:34.662884  844608 out.go:303] Setting JSON to false
	I1114 15:01:34.663906  844608 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":42247,"bootTime":1699931848,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:01:34.663968  844608 start.go:138] virtualization: kvm guest
	I1114 15:01:34.666357  844608 out.go:177] * [multinode-627820] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:01:34.667851  844608 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:01:34.669288  844608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:01:34.667862  844608 notify.go:220] Checking for updates...
	I1114 15:01:34.670915  844608 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:01:34.672373  844608 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:01:34.673731  844608 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:01:34.675093  844608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:01:34.676645  844608 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:01:34.711432  844608 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 15:01:34.712879  844608 start.go:298] selected driver: kvm2
	I1114 15:01:34.712898  844608 start.go:902] validating driver "kvm2" against <nil>
	I1114 15:01:34.712919  844608 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:01:34.713633  844608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:01:34.713741  844608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:01:34.727938  844608 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:01:34.728021  844608 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 15:01:34.728244  844608 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:01:34.728300  844608 cni.go:84] Creating CNI manager for ""
	I1114 15:01:34.728312  844608 cni.go:136] 0 nodes found, recommending kindnet
	I1114 15:01:34.728321  844608 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 15:01:34.728339  844608 start_flags.go:323] config:
	{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:01:34.728494  844608 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:01:34.730233  844608 out.go:177] * Starting control plane node multinode-627820 in cluster multinode-627820
	I1114 15:01:34.731428  844608 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:01:34.731466  844608 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:01:34.731476  844608 cache.go:56] Caching tarball of preloaded images
	I1114 15:01:34.731570  844608 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:01:34.731582  844608 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:01:34.731898  844608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:01:34.731921  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json: {Name:mk46c2d2eefd867316ee851b9cfdb3991cfa3faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:01:34.732078  844608 start.go:365] acquiring machines lock for multinode-627820: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:01:34.732113  844608 start.go:369] acquired machines lock for "multinode-627820" in 18.568µs
	I1114 15:01:34.732136  844608 start.go:93] Provisioning new machine with config: &{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:01:34.732202  844608 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 15:01:34.733972  844608 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1114 15:01:34.734115  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:01:34.734163  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:01:34.747685  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I1114 15:01:34.748076  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:01:34.748579  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:01:34.748602  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:01:34.748969  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:01:34.749168  844608 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:01:34.749282  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:34.749393  844608 start.go:159] libmachine.API.Create for "multinode-627820" (driver="kvm2")
	I1114 15:01:34.749418  844608 client.go:168] LocalClient.Create starting
	I1114 15:01:34.749449  844608 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 15:01:34.749486  844608 main.go:141] libmachine: Decoding PEM data...
	I1114 15:01:34.749504  844608 main.go:141] libmachine: Parsing certificate...
	I1114 15:01:34.749571  844608 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 15:01:34.749592  844608 main.go:141] libmachine: Decoding PEM data...
	I1114 15:01:34.749603  844608 main.go:141] libmachine: Parsing certificate...
	I1114 15:01:34.749624  844608 main.go:141] libmachine: Running pre-create checks...
	I1114 15:01:34.749636  844608 main.go:141] libmachine: (multinode-627820) Calling .PreCreateCheck
	I1114 15:01:34.749954  844608 main.go:141] libmachine: (multinode-627820) Calling .GetConfigRaw
	I1114 15:01:34.750344  844608 main.go:141] libmachine: Creating machine...
	I1114 15:01:34.750357  844608 main.go:141] libmachine: (multinode-627820) Calling .Create
	I1114 15:01:34.750462  844608 main.go:141] libmachine: (multinode-627820) Creating KVM machine...
	I1114 15:01:34.751836  844608 main.go:141] libmachine: (multinode-627820) DBG | found existing default KVM network
	I1114 15:01:34.752487  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:34.752348  844631 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1114 15:01:34.757441  844608 main.go:141] libmachine: (multinode-627820) DBG | trying to create private KVM network mk-multinode-627820 192.168.39.0/24...
	I1114 15:01:34.829798  844608 main.go:141] libmachine: (multinode-627820) DBG | private KVM network mk-multinode-627820 192.168.39.0/24 created
	I1114 15:01:34.829834  844608 main.go:141] libmachine: (multinode-627820) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820 ...
	I1114 15:01:34.829855  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:34.829696  844631 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:01:34.829871  844608 main.go:141] libmachine: (multinode-627820) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 15:01:34.829902  844608 main.go:141] libmachine: (multinode-627820) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 15:01:35.043201  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:35.043086  844631 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa...
	I1114 15:01:35.207152  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:35.207025  844631 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/multinode-627820.rawdisk...
	I1114 15:01:35.207187  844608 main.go:141] libmachine: (multinode-627820) DBG | Writing magic tar header
	I1114 15:01:35.207211  844608 main.go:141] libmachine: (multinode-627820) DBG | Writing SSH key tar header
	I1114 15:01:35.207231  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:35.207134  844631 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820 ...
	I1114 15:01:35.207249  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820
	I1114 15:01:35.207257  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 15:01:35.207279  844608 main.go:141] libmachine: (multinode-627820) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820 (perms=drwx------)
	I1114 15:01:35.207300  844608 main.go:141] libmachine: (multinode-627820) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 15:01:35.207313  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:01:35.207326  844608 main.go:141] libmachine: (multinode-627820) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 15:01:35.207340  844608 main.go:141] libmachine: (multinode-627820) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 15:01:35.207350  844608 main.go:141] libmachine: (multinode-627820) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 15:01:35.207357  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 15:01:35.207370  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 15:01:35.207379  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home/jenkins
	I1114 15:01:35.207389  844608 main.go:141] libmachine: (multinode-627820) DBG | Checking permissions on dir: /home
	I1114 15:01:35.207397  844608 main.go:141] libmachine: (multinode-627820) DBG | Skipping /home - not owner
	I1114 15:01:35.207413  844608 main.go:141] libmachine: (multinode-627820) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 15:01:35.207422  844608 main.go:141] libmachine: (multinode-627820) Creating domain...
	I1114 15:01:35.209733  844608 main.go:141] libmachine: (multinode-627820) define libvirt domain using xml: 
	I1114 15:01:35.209771  844608 main.go:141] libmachine: (multinode-627820) <domain type='kvm'>
	I1114 15:01:35.209780  844608 main.go:141] libmachine: (multinode-627820)   <name>multinode-627820</name>
	I1114 15:01:35.209786  844608 main.go:141] libmachine: (multinode-627820)   <memory unit='MiB'>2200</memory>
	I1114 15:01:35.209792  844608 main.go:141] libmachine: (multinode-627820)   <vcpu>2</vcpu>
	I1114 15:01:35.209800  844608 main.go:141] libmachine: (multinode-627820)   <features>
	I1114 15:01:35.209807  844608 main.go:141] libmachine: (multinode-627820)     <acpi/>
	I1114 15:01:35.209814  844608 main.go:141] libmachine: (multinode-627820)     <apic/>
	I1114 15:01:35.209821  844608 main.go:141] libmachine: (multinode-627820)     <pae/>
	I1114 15:01:35.209830  844608 main.go:141] libmachine: (multinode-627820)     
	I1114 15:01:35.209836  844608 main.go:141] libmachine: (multinode-627820)   </features>
	I1114 15:01:35.209844  844608 main.go:141] libmachine: (multinode-627820)   <cpu mode='host-passthrough'>
	I1114 15:01:35.209889  844608 main.go:141] libmachine: (multinode-627820)   
	I1114 15:01:35.209923  844608 main.go:141] libmachine: (multinode-627820)   </cpu>
	I1114 15:01:35.209939  844608 main.go:141] libmachine: (multinode-627820)   <os>
	I1114 15:01:35.209958  844608 main.go:141] libmachine: (multinode-627820)     <type>hvm</type>
	I1114 15:01:35.209973  844608 main.go:141] libmachine: (multinode-627820)     <boot dev='cdrom'/>
	I1114 15:01:35.209986  844608 main.go:141] libmachine: (multinode-627820)     <boot dev='hd'/>
	I1114 15:01:35.210000  844608 main.go:141] libmachine: (multinode-627820)     <bootmenu enable='no'/>
	I1114 15:01:35.210012  844608 main.go:141] libmachine: (multinode-627820)   </os>
	I1114 15:01:35.210050  844608 main.go:141] libmachine: (multinode-627820)   <devices>
	I1114 15:01:35.210079  844608 main.go:141] libmachine: (multinode-627820)     <disk type='file' device='cdrom'>
	I1114 15:01:35.210107  844608 main.go:141] libmachine: (multinode-627820)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/boot2docker.iso'/>
	I1114 15:01:35.210128  844608 main.go:141] libmachine: (multinode-627820)       <target dev='hdc' bus='scsi'/>
	I1114 15:01:35.210145  844608 main.go:141] libmachine: (multinode-627820)       <readonly/>
	I1114 15:01:35.210163  844608 main.go:141] libmachine: (multinode-627820)     </disk>
	I1114 15:01:35.210182  844608 main.go:141] libmachine: (multinode-627820)     <disk type='file' device='disk'>
	I1114 15:01:35.210198  844608 main.go:141] libmachine: (multinode-627820)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 15:01:35.210233  844608 main.go:141] libmachine: (multinode-627820)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/multinode-627820.rawdisk'/>
	I1114 15:01:35.210252  844608 main.go:141] libmachine: (multinode-627820)       <target dev='hda' bus='virtio'/>
	I1114 15:01:35.210265  844608 main.go:141] libmachine: (multinode-627820)     </disk>
	I1114 15:01:35.210279  844608 main.go:141] libmachine: (multinode-627820)     <interface type='network'>
	I1114 15:01:35.210294  844608 main.go:141] libmachine: (multinode-627820)       <source network='mk-multinode-627820'/>
	I1114 15:01:35.210306  844608 main.go:141] libmachine: (multinode-627820)       <model type='virtio'/>
	I1114 15:01:35.210320  844608 main.go:141] libmachine: (multinode-627820)     </interface>
	I1114 15:01:35.210337  844608 main.go:141] libmachine: (multinode-627820)     <interface type='network'>
	I1114 15:01:35.210352  844608 main.go:141] libmachine: (multinode-627820)       <source network='default'/>
	I1114 15:01:35.210365  844608 main.go:141] libmachine: (multinode-627820)       <model type='virtio'/>
	I1114 15:01:35.210379  844608 main.go:141] libmachine: (multinode-627820)     </interface>
	I1114 15:01:35.210391  844608 main.go:141] libmachine: (multinode-627820)     <serial type='pty'>
	I1114 15:01:35.210413  844608 main.go:141] libmachine: (multinode-627820)       <target port='0'/>
	I1114 15:01:35.210431  844608 main.go:141] libmachine: (multinode-627820)     </serial>
	I1114 15:01:35.210450  844608 main.go:141] libmachine: (multinode-627820)     <console type='pty'>
	I1114 15:01:35.210468  844608 main.go:141] libmachine: (multinode-627820)       <target type='serial' port='0'/>
	I1114 15:01:35.210482  844608 main.go:141] libmachine: (multinode-627820)     </console>
	I1114 15:01:35.210494  844608 main.go:141] libmachine: (multinode-627820)     <rng model='virtio'>
	I1114 15:01:35.210505  844608 main.go:141] libmachine: (multinode-627820)       <backend model='random'>/dev/random</backend>
	I1114 15:01:35.210520  844608 main.go:141] libmachine: (multinode-627820)     </rng>
	I1114 15:01:35.210528  844608 main.go:141] libmachine: (multinode-627820)     
	I1114 15:01:35.210539  844608 main.go:141] libmachine: (multinode-627820)     
	I1114 15:01:35.210546  844608 main.go:141] libmachine: (multinode-627820)   </devices>
	I1114 15:01:35.210553  844608 main.go:141] libmachine: (multinode-627820) </domain>
	I1114 15:01:35.210561  844608 main.go:141] libmachine: (multinode-627820) 
	I1114 15:01:35.214278  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:35:a7:bb in network default
	I1114 15:01:35.214920  844608 main.go:141] libmachine: (multinode-627820) Ensuring networks are active...
	I1114 15:01:35.214949  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:35.215643  844608 main.go:141] libmachine: (multinode-627820) Ensuring network default is active
	I1114 15:01:35.216026  844608 main.go:141] libmachine: (multinode-627820) Ensuring network mk-multinode-627820 is active
	I1114 15:01:35.216541  844608 main.go:141] libmachine: (multinode-627820) Getting domain xml...
	I1114 15:01:35.217161  844608 main.go:141] libmachine: (multinode-627820) Creating domain...
	I1114 15:01:36.430356  844608 main.go:141] libmachine: (multinode-627820) Waiting to get IP...
	I1114 15:01:36.431188  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:36.431730  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:36.431758  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:36.431714  844631 retry.go:31] will retry after 203.915107ms: waiting for machine to come up
	I1114 15:01:36.637270  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:36.637783  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:36.637816  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:36.637730  844631 retry.go:31] will retry after 251.87558ms: waiting for machine to come up
	I1114 15:01:36.891412  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:36.891850  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:36.891873  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:36.891813  844631 retry.go:31] will retry after 301.105846ms: waiting for machine to come up
	I1114 15:01:37.194590  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:37.195070  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:37.195097  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:37.194971  844631 retry.go:31] will retry after 470.883239ms: waiting for machine to come up
	I1114 15:01:37.667697  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:37.668157  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:37.668212  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:37.668122  844631 retry.go:31] will retry after 686.835351ms: waiting for machine to come up
	I1114 15:01:38.357305  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:38.357801  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:38.357837  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:38.357737  844631 retry.go:31] will retry after 646.331929ms: waiting for machine to come up
	I1114 15:01:39.005612  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:39.006085  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:39.006112  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:39.006048  844631 retry.go:31] will retry after 1.167309841s: waiting for machine to come up
	I1114 15:01:40.175269  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:40.175743  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:40.175779  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:40.175701  844631 retry.go:31] will retry after 1.35960572s: waiting for machine to come up
	I1114 15:01:41.537429  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:41.537847  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:41.537881  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:41.537787  844631 retry.go:31] will retry after 1.450737307s: waiting for machine to come up
	I1114 15:01:42.990519  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:42.991032  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:42.991067  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:42.990948  844631 retry.go:31] will retry after 1.96683731s: waiting for machine to come up
	I1114 15:01:44.959071  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:44.959579  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:44.959638  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:44.959546  844631 retry.go:31] will retry after 1.914892003s: waiting for machine to come up
	I1114 15:01:46.875902  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:46.876452  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:46.876486  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:46.876391  844631 retry.go:31] will retry after 3.585487152s: waiting for machine to come up
	I1114 15:01:50.463652  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:50.464037  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:50.464067  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:50.463977  844631 retry.go:31] will retry after 3.494494316s: waiting for machine to come up
	I1114 15:01:53.962805  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:53.963234  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:01:53.963263  844608 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:01:53.963192  844631 retry.go:31] will retry after 3.686911695s: waiting for machine to come up
	I1114 15:01:57.653855  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.654337  844608 main.go:141] libmachine: (multinode-627820) Found IP for machine: 192.168.39.63
	I1114 15:01:57.654361  844608 main.go:141] libmachine: (multinode-627820) Reserving static IP address...
	I1114 15:01:57.654371  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has current primary IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.654735  844608 main.go:141] libmachine: (multinode-627820) DBG | unable to find host DHCP lease matching {name: "multinode-627820", mac: "52:54:00:c4:37:2e", ip: "192.168.39.63"} in network mk-multinode-627820
	I1114 15:01:57.730105  844608 main.go:141] libmachine: (multinode-627820) DBG | Getting to WaitForSSH function...
	I1114 15:01:57.730149  844608 main.go:141] libmachine: (multinode-627820) Reserved static IP address: 192.168.39.63
	I1114 15:01:57.730164  844608 main.go:141] libmachine: (multinode-627820) Waiting for SSH to be available...
	I1114 15:01:57.732499  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.732890  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:57.732997  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.733018  844608 main.go:141] libmachine: (multinode-627820) DBG | Using SSH client type: external
	I1114 15:01:57.733046  844608 main.go:141] libmachine: (multinode-627820) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa (-rw-------)
	I1114 15:01:57.733141  844608 main.go:141] libmachine: (multinode-627820) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:01:57.733171  844608 main.go:141] libmachine: (multinode-627820) DBG | About to run SSH command:
	I1114 15:01:57.733193  844608 main.go:141] libmachine: (multinode-627820) DBG | exit 0
	I1114 15:01:57.824535  844608 main.go:141] libmachine: (multinode-627820) DBG | SSH cmd err, output: <nil>: 
	I1114 15:01:57.824818  844608 main.go:141] libmachine: (multinode-627820) KVM machine creation complete!
	I1114 15:01:57.825221  844608 main.go:141] libmachine: (multinode-627820) Calling .GetConfigRaw
	I1114 15:01:57.825832  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:57.826056  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:57.826248  844608 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 15:01:57.826267  844608 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:01:57.827719  844608 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 15:01:57.827735  844608 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 15:01:57.827742  844608 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 15:01:57.827749  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:57.830190  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.830636  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:57.830666  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.830757  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:57.830964  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:57.831140  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:57.831294  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:57.831470  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:01:57.831837  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:01:57.831850  844608 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 15:01:57.955836  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:01:57.955862  844608 main.go:141] libmachine: Detecting the provisioner...
	I1114 15:01:57.955870  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:57.958777  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.959129  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:57.959158  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:57.959331  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:57.959566  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:57.959813  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:57.959979  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:57.960190  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:01:57.960529  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:01:57.960540  844608 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 15:01:58.085816  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 15:01:58.085914  844608 main.go:141] libmachine: found compatible host: buildroot
	I1114 15:01:58.085940  844608 main.go:141] libmachine: Provisioning with buildroot...
	I1114 15:01:58.085958  844608 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:01:58.086271  844608 buildroot.go:166] provisioning hostname "multinode-627820"
	I1114 15:01:58.086300  844608 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:01:58.086556  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:58.089315  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.089636  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:58.089661  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.089801  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:58.090019  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.090166  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.090287  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:58.090432  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:01:58.090785  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:01:58.090799  844608 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-627820 && echo "multinode-627820" | sudo tee /etc/hostname
	I1114 15:01:58.225285  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-627820
	
	I1114 15:01:58.225327  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:58.228185  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.228553  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:58.228587  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.228767  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:58.228948  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.229108  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.229284  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:58.229495  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:01:58.229893  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:01:58.229911  844608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-627820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-627820/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-627820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:01:58.361441  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:01:58.361507  844608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:01:58.361554  844608 buildroot.go:174] setting up certificates
	I1114 15:01:58.361576  844608 provision.go:83] configureAuth start
	I1114 15:01:58.361612  844608 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:01:58.361995  844608 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:01:58.364774  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.365038  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:58.365065  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.365173  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:58.367403  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.367936  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:58.367963  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.368134  844608 provision.go:138] copyHostCerts
	I1114 15:01:58.368165  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:01:58.368193  844608 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:01:58.368212  844608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:01:58.368286  844608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:01:58.368413  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:01:58.368441  844608 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:01:58.368451  844608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:01:58.368488  844608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:01:58.368547  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:01:58.368567  844608 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:01:58.368574  844608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:01:58.368595  844608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:01:58.368659  844608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.multinode-627820 san=[192.168.39.63 192.168.39.63 localhost 127.0.0.1 minikube multinode-627820]
	I1114 15:01:58.631161  844608 provision.go:172] copyRemoteCerts
	I1114 15:01:58.631223  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:01:58.631250  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:58.634058  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.634495  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:58.634531  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.634651  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:58.634910  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.635101  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:58.635277  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:01:58.725287  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 15:01:58.725387  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:01:58.748136  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 15:01:58.748198  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1114 15:01:58.769969  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 15:01:58.770043  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:01:58.791353  844608 provision.go:86] duration metric: configureAuth took 429.760241ms
	I1114 15:01:58.791381  844608 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:01:58.791547  844608 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:01:58.791620  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:58.794633  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.794995  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:58.795030  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:58.795237  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:58.795441  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.795612  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:58.795770  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:58.795906  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:01:58.796254  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:01:58.796270  844608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:01:59.120444  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:01:59.120489  844608 main.go:141] libmachine: Checking connection to Docker...
	I1114 15:01:59.120500  844608 main.go:141] libmachine: (multinode-627820) Calling .GetURL
	I1114 15:01:59.122019  844608 main.go:141] libmachine: (multinode-627820) DBG | Using libvirt version 6000000
	I1114 15:01:59.124656  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.125064  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.125105  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.125300  844608 main.go:141] libmachine: Docker is up and running!
	I1114 15:01:59.125320  844608 main.go:141] libmachine: Reticulating splines...
	I1114 15:01:59.125328  844608 client.go:171] LocalClient.Create took 24.375898982s
	I1114 15:01:59.125356  844608 start.go:167] duration metric: libmachine.API.Create for "multinode-627820" took 24.375962075s
	I1114 15:01:59.125392  844608 start.go:300] post-start starting for "multinode-627820" (driver="kvm2")
	I1114 15:01:59.125410  844608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:01:59.125436  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:59.125702  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:01:59.125725  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:59.128196  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.128582  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.128610  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.128695  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:59.128895  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:59.129129  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:59.129310  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:01:59.219306  844608 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:01:59.223289  844608 command_runner.go:130] > NAME=Buildroot
	I1114 15:01:59.223311  844608 command_runner.go:130] > VERSION=2021.02.12-1-g9cb9327-dirty
	I1114 15:01:59.223316  844608 command_runner.go:130] > ID=buildroot
	I1114 15:01:59.223320  844608 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 15:01:59.223325  844608 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 15:01:59.223353  844608 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:01:59.223364  844608 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:01:59.223421  844608 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:01:59.223501  844608 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:01:59.223514  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /etc/ssl/certs/8322112.pem
	I1114 15:01:59.223597  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:01:59.232565  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:01:59.254194  844608 start.go:303] post-start completed in 128.787196ms
	I1114 15:01:59.254249  844608 main.go:141] libmachine: (multinode-627820) Calling .GetConfigRaw
	I1114 15:01:59.254877  844608 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:01:59.257280  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.257625  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.257737  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.257932  844608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:01:59.258106  844608 start.go:128] duration metric: createHost completed in 24.525894678s
	I1114 15:01:59.258127  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:59.260436  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.260846  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.260866  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.261064  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:59.261278  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:59.261434  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:59.261589  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:59.261785  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:01:59.262106  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:01:59.262117  844608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:01:59.385808  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699974119.355707280
	
	I1114 15:01:59.385836  844608 fix.go:206] guest clock: 1699974119.355707280
	I1114 15:01:59.385843  844608 fix.go:219] Guest: 2023-11-14 15:01:59.35570728 +0000 UTC Remote: 2023-11-14 15:01:59.258116917 +0000 UTC m=+24.645595448 (delta=97.590363ms)
	I1114 15:01:59.385868  844608 fix.go:190] guest clock delta is within tolerance: 97.590363ms
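(For reference, the delta logged here is simply guest timestamp minus remote timestamp: 15:01:59.355707280 - 15:01:59.258116917 ≈ 0.097590363 s = 97.590363 ms, which is why fix.go reports the guest clock as within tolerance.)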
	I1114 15:01:59.385873  844608 start.go:83] releasing machines lock for "multinode-627820", held for 24.653750113s
	I1114 15:01:59.385894  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:59.386223  844608 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:01:59.388993  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.389528  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.389561  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.389684  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:59.390253  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:59.390433  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:01:59.390546  844608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:01:59.390609  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:59.390723  844608 ssh_runner.go:195] Run: cat /version.json
	I1114 15:01:59.390755  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:01:59.393235  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.393500  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.393588  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.393623  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.393794  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:59.393919  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:01:59.393926  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:59.393948  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:01:59.394038  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:59.394100  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:01:59.394255  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:01:59.394295  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:01:59.394372  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:01:59.394517  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:01:59.481281  844608 command_runner.go:130] > {"iso_version": "v1.32.1-1699485311-17565", "kicbase_version": "v0.0.42", "minikube_version": "v1.32.0", "commit": "ac8620e02dd92b447e2556d107d7751e3faf21d2"}
	I1114 15:01:59.481863  844608 ssh_runner.go:195] Run: systemctl --version
	I1114 15:01:59.501492  844608 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 15:01:59.501571  844608 command_runner.go:130] > systemd 247 (247)
	I1114 15:01:59.501595  844608 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1114 15:01:59.501662  844608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:01:59.662977  844608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 15:01:59.670497  844608 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 15:01:59.670618  844608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:01:59.670700  844608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:01:59.686700  844608 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 15:01:59.686728  844608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:01:59.686736  844608 start.go:472] detecting cgroup driver to use...
	I1114 15:01:59.686816  844608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:01:59.701685  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:01:59.714334  844608 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:01:59.714397  844608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:01:59.727696  844608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:01:59.740815  844608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:01:59.847166  844608 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1114 15:01:59.847249  844608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:01:59.965960  844608 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1114 15:01:59.965997  844608 docker.go:219] disabling docker service ...
	I1114 15:01:59.966045  844608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:01:59.979593  844608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:01:59.991631  844608 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1114 15:01:59.991713  844608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:02:00.100649  844608 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1114 15:02:00.100732  844608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:02:00.112655  844608 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1114 15:02:00.112688  844608 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1114 15:02:00.202361  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:02:00.214485  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:02:00.230929  844608 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 15:02:00.231354  844608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:02:00.231413  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:02:00.240039  844608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:02:00.240092  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:02:00.249082  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:02:00.257657  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:02:00.266079  844608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:02:00.274942  844608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:02:00.282495  844608 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:02:00.282535  844608 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:02:00.282570  844608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:02:00.294388  844608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:02:00.303218  844608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:02:00.408606  844608 ssh_runner.go:195] Run: sudo systemctl restart crio
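The sed/tee commands above amount to roughly the following runtime configuration just before crio is restarted (a sketch reconstructed from the logged commands, not captured from the VM):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

plus /etc/cni/net.mk removed, br_netfilter loaded, and net.ipv4.ip_forward set to 1.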
	I1114 15:02:00.579836  844608 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:02:00.579930  844608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:02:00.584699  844608 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 15:02:00.584731  844608 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 15:02:00.584773  844608 command_runner.go:130] > Device: 16h/22d	Inode: 767         Links: 1
	I1114 15:02:00.584784  844608 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:02:00.584792  844608 command_runner.go:130] > Access: 2023-11-14 15:02:00.535430529 +0000
	I1114 15:02:00.584802  844608 command_runner.go:130] > Modify: 2023-11-14 15:02:00.535430529 +0000
	I1114 15:02:00.584810  844608 command_runner.go:130] > Change: 2023-11-14 15:02:00.535430529 +0000
	I1114 15:02:00.584817  844608 command_runner.go:130] >  Birth: -
	I1114 15:02:00.584950  844608 start.go:540] Will wait 60s for crictl version
	I1114 15:02:00.585013  844608 ssh_runner.go:195] Run: which crictl
	I1114 15:02:00.588791  844608 command_runner.go:130] > /usr/bin/crictl
	I1114 15:02:00.588859  844608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:02:00.623798  844608 command_runner.go:130] > Version:  0.1.0
	I1114 15:02:00.623824  844608 command_runner.go:130] > RuntimeName:  cri-o
	I1114 15:02:00.623831  844608 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1114 15:02:00.623843  844608 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 15:02:00.623865  844608 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:02:00.623965  844608 ssh_runner.go:195] Run: crio --version
	I1114 15:02:00.668956  844608 command_runner.go:130] > crio version 1.24.1
	I1114 15:02:00.668986  844608 command_runner.go:130] > Version:          1.24.1
	I1114 15:02:00.668998  844608 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:02:00.669005  844608 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:02:00.669015  844608 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:02:00.669022  844608 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:02:00.669028  844608 command_runner.go:130] > Compiler:         gc
	I1114 15:02:00.669035  844608 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:02:00.669049  844608 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:02:00.669066  844608 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:02:00.669072  844608 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:02:00.669079  844608 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:02:00.670434  844608 ssh_runner.go:195] Run: crio --version
	I1114 15:02:00.710102  844608 command_runner.go:130] > crio version 1.24.1
	I1114 15:02:00.710132  844608 command_runner.go:130] > Version:          1.24.1
	I1114 15:02:00.710162  844608 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:02:00.710170  844608 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:02:00.710179  844608 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:02:00.710187  844608 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:02:00.710193  844608 command_runner.go:130] > Compiler:         gc
	I1114 15:02:00.710201  844608 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:02:00.710212  844608 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:02:00.710225  844608 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:02:00.710241  844608 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:02:00.710248  844608 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:02:00.713523  844608 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:02:00.714833  844608 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:02:00.717280  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:02:00.717623  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:02:00.717653  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:02:00.717811  844608 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:02:00.721770  844608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:02:00.733581  844608 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:02:00.733640  844608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:02:00.765641  844608 command_runner.go:130] > {
	I1114 15:02:00.765670  844608 command_runner.go:130] >   "images": [
	I1114 15:02:00.765677  844608 command_runner.go:130] >   ]
	I1114 15:02:00.765682  844608 command_runner.go:130] > }
	I1114 15:02:00.765825  844608 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:02:00.765888  844608 ssh_runner.go:195] Run: which lz4
	I1114 15:02:00.769701  844608 command_runner.go:130] > /usr/bin/lz4
	I1114 15:02:00.769905  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1114 15:02:00.770005  844608 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:02:00.773915  844608 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:02:00.774101  844608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:02:00.774131  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:02:02.610094  844608 crio.go:444] Took 1.840118 seconds to copy over tarball
	I1114 15:02:02.610169  844608 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:02:05.516248  844608 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.906046225s)
	I1114 15:02:05.516294  844608 crio.go:451] Took 2.906173 seconds to extract the tarball
	I1114 15:02:05.516305  844608 ssh_runner.go:146] rm: /preloaded.tar.lz4
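Since the earlier `sudo crictl images --output json` came back empty, the preload path is taken: the tarball is copied in, unpacked into CRI-O's storage root, then removed, and the image listing below confirms the preloaded images. The equivalent manual sequence is roughly (paths as logged; the storage root appears in the crio config dumped further down):

	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpacks into /var/lib/containers/storage
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json                 # should now list the preloaded images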
	I1114 15:02:05.557585  844608 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:02:05.633688  844608 command_runner.go:130] > {
	I1114 15:02:05.633720  844608 command_runner.go:130] >   "images": [
	I1114 15:02:05.633727  844608 command_runner.go:130] >     {
	I1114 15:02:05.633750  844608 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1114 15:02:05.633757  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.633766  844608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1114 15:02:05.633772  844608 command_runner.go:130] >       ],
	I1114 15:02:05.633779  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.633793  844608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1114 15:02:05.633807  844608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1114 15:02:05.633824  844608 command_runner.go:130] >       ],
	I1114 15:02:05.633832  844608 command_runner.go:130] >       "size": "65258016",
	I1114 15:02:05.633837  844608 command_runner.go:130] >       "uid": null,
	I1114 15:02:05.633844  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.633849  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.633853  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.633860  844608 command_runner.go:130] >     },
	I1114 15:02:05.633864  844608 command_runner.go:130] >     {
	I1114 15:02:05.633869  844608 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1114 15:02:05.633874  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.633881  844608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1114 15:02:05.633885  844608 command_runner.go:130] >       ],
	I1114 15:02:05.633889  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.633898  844608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1114 15:02:05.633908  844608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1114 15:02:05.633913  844608 command_runner.go:130] >       ],
	I1114 15:02:05.633927  844608 command_runner.go:130] >       "size": "31470524",
	I1114 15:02:05.633931  844608 command_runner.go:130] >       "uid": null,
	I1114 15:02:05.633937  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.633944  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.633948  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.633952  844608 command_runner.go:130] >     },
	I1114 15:02:05.633955  844608 command_runner.go:130] >     {
	I1114 15:02:05.633963  844608 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1114 15:02:05.633967  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.633989  844608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1114 15:02:05.633995  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634000  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634007  844608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1114 15:02:05.634017  844608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1114 15:02:05.634021  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634026  844608 command_runner.go:130] >       "size": "53621675",
	I1114 15:02:05.634030  844608 command_runner.go:130] >       "uid": null,
	I1114 15:02:05.634035  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634039  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634043  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634050  844608 command_runner.go:130] >     },
	I1114 15:02:05.634055  844608 command_runner.go:130] >     {
	I1114 15:02:05.634061  844608 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1114 15:02:05.634065  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.634071  844608 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1114 15:02:05.634077  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634081  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634089  844608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1114 15:02:05.634101  844608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1114 15:02:05.634116  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634127  844608 command_runner.go:130] >       "size": "295456551",
	I1114 15:02:05.634133  844608 command_runner.go:130] >       "uid": {
	I1114 15:02:05.634140  844608 command_runner.go:130] >         "value": "0"
	I1114 15:02:05.634149  844608 command_runner.go:130] >       },
	I1114 15:02:05.634156  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634163  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634177  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634182  844608 command_runner.go:130] >     },
	I1114 15:02:05.634188  844608 command_runner.go:130] >     {
	I1114 15:02:05.634198  844608 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1114 15:02:05.634206  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.634214  844608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1114 15:02:05.634218  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634222  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634229  844608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1114 15:02:05.634237  844608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1114 15:02:05.634266  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634285  844608 command_runner.go:130] >       "size": "127165392",
	I1114 15:02:05.634289  844608 command_runner.go:130] >       "uid": {
	I1114 15:02:05.634293  844608 command_runner.go:130] >         "value": "0"
	I1114 15:02:05.634297  844608 command_runner.go:130] >       },
	I1114 15:02:05.634301  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634305  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634310  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634316  844608 command_runner.go:130] >     },
	I1114 15:02:05.634320  844608 command_runner.go:130] >     {
	I1114 15:02:05.634331  844608 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1114 15:02:05.634338  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.634343  844608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1114 15:02:05.634347  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634352  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634359  844608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1114 15:02:05.634369  844608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1114 15:02:05.634373  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634377  844608 command_runner.go:130] >       "size": "123188534",
	I1114 15:02:05.634381  844608 command_runner.go:130] >       "uid": {
	I1114 15:02:05.634385  844608 command_runner.go:130] >         "value": "0"
	I1114 15:02:05.634389  844608 command_runner.go:130] >       },
	I1114 15:02:05.634394  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634400  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634404  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634409  844608 command_runner.go:130] >     },
	I1114 15:02:05.634413  844608 command_runner.go:130] >     {
	I1114 15:02:05.634419  844608 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1114 15:02:05.634426  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.634434  844608 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1114 15:02:05.634440  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634444  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634454  844608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1114 15:02:05.634463  844608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1114 15:02:05.634467  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634476  844608 command_runner.go:130] >       "size": "74691991",
	I1114 15:02:05.634482  844608 command_runner.go:130] >       "uid": null,
	I1114 15:02:05.634493  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634498  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634502  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634506  844608 command_runner.go:130] >     },
	I1114 15:02:05.634509  844608 command_runner.go:130] >     {
	I1114 15:02:05.634515  844608 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1114 15:02:05.634522  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.634527  844608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1114 15:02:05.634531  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634541  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634626  844608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1114 15:02:05.634639  844608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1114 15:02:05.634644  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634651  844608 command_runner.go:130] >       "size": "61498678",
	I1114 15:02:05.634661  844608 command_runner.go:130] >       "uid": {
	I1114 15:02:05.634667  844608 command_runner.go:130] >         "value": "0"
	I1114 15:02:05.634676  844608 command_runner.go:130] >       },
	I1114 15:02:05.634681  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634686  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634690  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634694  844608 command_runner.go:130] >     },
	I1114 15:02:05.634697  844608 command_runner.go:130] >     {
	I1114 15:02:05.634703  844608 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1114 15:02:05.634710  844608 command_runner.go:130] >       "repoTags": [
	I1114 15:02:05.634715  844608 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1114 15:02:05.634719  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634723  844608 command_runner.go:130] >       "repoDigests": [
	I1114 15:02:05.634735  844608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1114 15:02:05.634742  844608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1114 15:02:05.634746  844608 command_runner.go:130] >       ],
	I1114 15:02:05.634750  844608 command_runner.go:130] >       "size": "750414",
	I1114 15:02:05.634754  844608 command_runner.go:130] >       "uid": {
	I1114 15:02:05.634759  844608 command_runner.go:130] >         "value": "65535"
	I1114 15:02:05.634764  844608 command_runner.go:130] >       },
	I1114 15:02:05.634769  844608 command_runner.go:130] >       "username": "",
	I1114 15:02:05.634773  844608 command_runner.go:130] >       "spec": null,
	I1114 15:02:05.634780  844608 command_runner.go:130] >       "pinned": false
	I1114 15:02:05.634783  844608 command_runner.go:130] >     }
	I1114 15:02:05.634787  844608 command_runner.go:130] >   ]
	I1114 15:02:05.634793  844608 command_runner.go:130] > }
	I1114 15:02:05.635222  844608 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:02:05.635241  844608 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:02:05.635304  844608 ssh_runner.go:195] Run: crio config
	I1114 15:02:05.687719  844608 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 15:02:05.687788  844608 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 15:02:05.687800  844608 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 15:02:05.687806  844608 command_runner.go:130] > #
	I1114 15:02:05.687817  844608 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 15:02:05.687832  844608 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 15:02:05.687844  844608 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 15:02:05.687864  844608 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 15:02:05.687876  844608 command_runner.go:130] > # reload'.
	I1114 15:02:05.687885  844608 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 15:02:05.687895  844608 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 15:02:05.687906  844608 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 15:02:05.687919  844608 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 15:02:05.687925  844608 command_runner.go:130] > [crio]
	I1114 15:02:05.687934  844608 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 15:02:05.687948  844608 command_runner.go:130] > # containers images, in this directory.
	I1114 15:02:05.687957  844608 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1114 15:02:05.687976  844608 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 15:02:05.687989  844608 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1114 15:02:05.688004  844608 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 15:02:05.688017  844608 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 15:02:05.688025  844608 command_runner.go:130] > storage_driver = "overlay"
	I1114 15:02:05.688038  844608 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 15:02:05.688053  844608 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 15:02:05.688069  844608 command_runner.go:130] > storage_option = [
	I1114 15:02:05.688085  844608 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1114 15:02:05.688095  844608 command_runner.go:130] > ]
	I1114 15:02:05.688107  844608 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 15:02:05.688121  844608 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 15:02:05.688163  844608 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 15:02:05.688183  844608 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 15:02:05.688192  844608 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 15:02:05.688199  844608 command_runner.go:130] > # always happen on a node reboot
	I1114 15:02:05.688212  844608 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 15:02:05.688223  844608 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 15:02:05.688236  844608 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 15:02:05.688262  844608 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 15:02:05.688281  844608 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 15:02:05.688299  844608 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 15:02:05.688316  844608 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 15:02:05.688327  844608 command_runner.go:130] > # internal_wipe = true
	I1114 15:02:05.688337  844608 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 15:02:05.688352  844608 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 15:02:05.688365  844608 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 15:02:05.688375  844608 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 15:02:05.688387  844608 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 15:02:05.688397  844608 command_runner.go:130] > [crio.api]
	I1114 15:02:05.688406  844608 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 15:02:05.688416  844608 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 15:02:05.688425  844608 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 15:02:05.688435  844608 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 15:02:05.688451  844608 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 15:02:05.688463  844608 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 15:02:05.688470  844608 command_runner.go:130] > # stream_port = "0"
	I1114 15:02:05.688481  844608 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 15:02:05.688493  844608 command_runner.go:130] > # stream_enable_tls = false
	I1114 15:02:05.688505  844608 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 15:02:05.688512  844608 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 15:02:05.688522  844608 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 15:02:05.688535  844608 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 15:02:05.688544  844608 command_runner.go:130] > # minutes.
	I1114 15:02:05.688551  844608 command_runner.go:130] > # stream_tls_cert = ""
	I1114 15:02:05.688563  844608 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 15:02:05.688576  844608 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 15:02:05.688586  844608 command_runner.go:130] > # stream_tls_key = ""
	I1114 15:02:05.688595  844608 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 15:02:05.688609  844608 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 15:02:05.688623  844608 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 15:02:05.688634  844608 command_runner.go:130] > # stream_tls_ca = ""
	I1114 15:02:05.688648  844608 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:02:05.688659  844608 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1114 15:02:05.688672  844608 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:02:05.688683  844608 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1114 15:02:05.688833  844608 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 15:02:05.688856  844608 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 15:02:05.688863  844608 command_runner.go:130] > [crio.runtime]
	I1114 15:02:05.688872  844608 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 15:02:05.688881  844608 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 15:02:05.688890  844608 command_runner.go:130] > # "nofile=1024:2048"
	I1114 15:02:05.688901  844608 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 15:02:05.688911  844608 command_runner.go:130] > # default_ulimits = [
	I1114 15:02:05.688918  844608 command_runner.go:130] > # ]
	I1114 15:02:05.688929  844608 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 15:02:05.688939  844608 command_runner.go:130] > # no_pivot = false
	I1114 15:02:05.688950  844608 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 15:02:05.688964  844608 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 15:02:05.688977  844608 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 15:02:05.688994  844608 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 15:02:05.689006  844608 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 15:02:05.689017  844608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:02:05.689030  844608 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1114 15:02:05.689045  844608 command_runner.go:130] > # Cgroup setting for conmon
	I1114 15:02:05.689060  844608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 15:02:05.689078  844608 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 15:02:05.689089  844608 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 15:02:05.689102  844608 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 15:02:05.689116  844608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:02:05.689125  844608 command_runner.go:130] > conmon_env = [
	I1114 15:02:05.689135  844608 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1114 15:02:05.689143  844608 command_runner.go:130] > ]
	I1114 15:02:05.689152  844608 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 15:02:05.689164  844608 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 15:02:05.689177  844608 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 15:02:05.689188  844608 command_runner.go:130] > # default_env = [
	I1114 15:02:05.689194  844608 command_runner.go:130] > # ]
	I1114 15:02:05.689208  844608 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 15:02:05.689217  844608 command_runner.go:130] > # selinux = false
	I1114 15:02:05.689227  844608 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 15:02:05.689240  844608 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 15:02:05.689255  844608 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 15:02:05.689267  844608 command_runner.go:130] > # seccomp_profile = ""
	I1114 15:02:05.689277  844608 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 15:02:05.689291  844608 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 15:02:05.689306  844608 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 15:02:05.689316  844608 command_runner.go:130] > # which might increase security.
	I1114 15:02:05.689325  844608 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1114 15:02:05.689340  844608 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 15:02:05.689357  844608 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 15:02:05.689369  844608 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 15:02:05.689381  844608 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 15:02:05.689392  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:02:05.689403  844608 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 15:02:05.689412  844608 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 15:02:05.689423  844608 command_runner.go:130] > # the cgroup blockio controller.
	I1114 15:02:05.689429  844608 command_runner.go:130] > # blockio_config_file = ""
	I1114 15:02:05.689442  844608 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 15:02:05.689452  844608 command_runner.go:130] > # irqbalance daemon.
	I1114 15:02:05.689469  844608 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 15:02:05.689482  844608 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 15:02:05.689493  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:02:05.689501  844608 command_runner.go:130] > # rdt_config_file = ""
	I1114 15:02:05.689513  844608 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 15:02:05.689525  844608 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 15:02:05.689535  844608 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 15:02:05.689542  844608 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 15:02:05.689553  844608 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 15:02:05.689565  844608 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 15:02:05.689574  844608 command_runner.go:130] > # will be added.
	I1114 15:02:05.689582  844608 command_runner.go:130] > # default_capabilities = [
	I1114 15:02:05.689592  844608 command_runner.go:130] > # 	"CHOWN",
	I1114 15:02:05.689600  844608 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 15:02:05.689608  844608 command_runner.go:130] > # 	"FSETID",
	I1114 15:02:05.689619  844608 command_runner.go:130] > # 	"FOWNER",
	I1114 15:02:05.689627  844608 command_runner.go:130] > # 	"SETGID",
	I1114 15:02:05.689637  844608 command_runner.go:130] > # 	"SETUID",
	I1114 15:02:05.689650  844608 command_runner.go:130] > # 	"SETPCAP",
	I1114 15:02:05.689661  844608 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 15:02:05.689668  844608 command_runner.go:130] > # 	"KILL",
	I1114 15:02:05.689678  844608 command_runner.go:130] > # ]
	I1114 15:02:05.689690  844608 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 15:02:05.689704  844608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:02:05.689713  844608 command_runner.go:130] > # default_sysctls = [
	I1114 15:02:05.689726  844608 command_runner.go:130] > # ]
	I1114 15:02:05.689742  844608 command_runner.go:130] > # List of devices on the host that a
	I1114 15:02:05.689757  844608 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 15:02:05.689769  844608 command_runner.go:130] > # allowed_devices = [
	I1114 15:02:05.689779  844608 command_runner.go:130] > # 	"/dev/fuse",
	I1114 15:02:05.689787  844608 command_runner.go:130] > # ]
	I1114 15:02:05.689794  844608 command_runner.go:130] > # List of additional devices, specified as
	I1114 15:02:05.689809  844608 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 15:02:05.689820  844608 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 15:02:05.689900  844608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:02:05.689913  844608 command_runner.go:130] > # additional_devices = [
	I1114 15:02:05.689922  844608 command_runner.go:130] > # ]
	I1114 15:02:05.689929  844608 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 15:02:05.689936  844608 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 15:02:05.689941  844608 command_runner.go:130] > # 	"/etc/cdi",
	I1114 15:02:05.689953  844608 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 15:02:05.689960  844608 command_runner.go:130] > # ]
	I1114 15:02:05.689972  844608 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 15:02:05.689986  844608 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 15:02:05.689995  844608 command_runner.go:130] > # Defaults to false.
	I1114 15:02:05.690004  844608 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 15:02:05.690016  844608 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 15:02:05.690029  844608 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 15:02:05.690036  844608 command_runner.go:130] > # hooks_dir = [
	I1114 15:02:05.690047  844608 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 15:02:05.690052  844608 command_runner.go:130] > # ]
	I1114 15:02:05.690069  844608 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 15:02:05.690083  844608 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 15:02:05.690095  844608 command_runner.go:130] > # its default mounts from the following two files:
	I1114 15:02:05.690107  844608 command_runner.go:130] > #
	I1114 15:02:05.690121  844608 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 15:02:05.690134  844608 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 15:02:05.690146  844608 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 15:02:05.690154  844608 command_runner.go:130] > #
	I1114 15:02:05.690164  844608 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 15:02:05.690176  844608 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 15:02:05.690190  844608 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 15:02:05.690198  844608 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 15:02:05.690207  844608 command_runner.go:130] > #
	I1114 15:02:05.690214  844608 command_runner.go:130] > # default_mounts_file = ""
	I1114 15:02:05.690226  844608 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 15:02:05.690237  844608 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 15:02:05.690251  844608 command_runner.go:130] > pids_limit = 1024
	I1114 15:02:05.690264  844608 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1114 15:02:05.690276  844608 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 15:02:05.690287  844608 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 15:02:05.690302  844608 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 15:02:05.690314  844608 command_runner.go:130] > # log_size_max = -1
	I1114 15:02:05.690328  844608 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1114 15:02:05.690337  844608 command_runner.go:130] > # log_to_journald = false
	I1114 15:02:05.690347  844608 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 15:02:05.690357  844608 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 15:02:05.690367  844608 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 15:02:05.690378  844608 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 15:02:05.690387  844608 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 15:02:05.690398  844608 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 15:02:05.690407  844608 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 15:02:05.690417  844608 command_runner.go:130] > # read_only = false
	I1114 15:02:05.690424  844608 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 15:02:05.690432  844608 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 15:02:05.690436  844608 command_runner.go:130] > # live configuration reload.
	I1114 15:02:05.690443  844608 command_runner.go:130] > # log_level = "info"
	I1114 15:02:05.690448  844608 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 15:02:05.690455  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:02:05.690459  844608 command_runner.go:130] > # log_filter = ""
	I1114 15:02:05.690475  844608 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 15:02:05.690488  844608 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 15:02:05.690496  844608 command_runner.go:130] > # separated by comma.
	I1114 15:02:05.690505  844608 command_runner.go:130] > # uid_mappings = ""
	I1114 15:02:05.690518  844608 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 15:02:05.690528  844608 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 15:02:05.690537  844608 command_runner.go:130] > # separated by comma.
	I1114 15:02:05.690544  844608 command_runner.go:130] > # gid_mappings = ""
	I1114 15:02:05.690557  844608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 15:02:05.690570  844608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:02:05.690583  844608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:02:05.690590  844608 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 15:02:05.690603  844608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 15:02:05.690614  844608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:02:05.690627  844608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:02:05.690637  844608 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 15:02:05.690647  844608 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 15:02:05.690658  844608 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 15:02:05.690667  844608 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1114 15:02:05.690673  844608 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 15:02:05.690679  844608 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 15:02:05.690687  844608 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 15:02:05.690692  844608 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 15:02:05.690696  844608 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 15:02:05.690728  844608 command_runner.go:130] > drop_infra_ctr = false
	I1114 15:02:05.690740  844608 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 15:02:05.690750  844608 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1114 15:02:05.690765  844608 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 15:02:05.690775  844608 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 15:02:05.690784  844608 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 15:02:05.690795  844608 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 15:02:05.690804  844608 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 15:02:05.690816  844608 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 15:02:05.690826  844608 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1114 15:02:05.690836  844608 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 15:02:05.690848  844608 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 15:02:05.690867  844608 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 15:02:05.690877  844608 command_runner.go:130] > # default_runtime = "runc"
	I1114 15:02:05.690887  844608 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 15:02:05.690904  844608 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1114 15:02:05.690916  844608 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 15:02:05.690921  844608 command_runner.go:130] > # creation as a file is not desired either.
	I1114 15:02:05.690929  844608 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 15:02:05.690936  844608 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 15:02:05.690941  844608 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 15:02:05.690944  844608 command_runner.go:130] > # ]
	I1114 15:02:05.690950  844608 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 15:02:05.690957  844608 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 15:02:05.690963  844608 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 15:02:05.690971  844608 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 15:02:05.690975  844608 command_runner.go:130] > #
	I1114 15:02:05.690982  844608 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 15:02:05.690987  844608 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 15:02:05.690992  844608 command_runner.go:130] > #  runtime_type = "oci"
	I1114 15:02:05.690999  844608 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 15:02:05.691006  844608 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 15:02:05.691010  844608 command_runner.go:130] > #  allowed_annotations = []
	I1114 15:02:05.691014  844608 command_runner.go:130] > # Where:
	I1114 15:02:05.691019  844608 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 15:02:05.691027  844608 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 15:02:05.691033  844608 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 15:02:05.691039  844608 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 15:02:05.691045  844608 command_runner.go:130] > #   in $PATH.
	I1114 15:02:05.691051  844608 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 15:02:05.691058  844608 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 15:02:05.691069  844608 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 15:02:05.691075  844608 command_runner.go:130] > #   state.
	I1114 15:02:05.691081  844608 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 15:02:05.691089  844608 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1114 15:02:05.691095  844608 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 15:02:05.691103  844608 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 15:02:05.691109  844608 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 15:02:05.691119  844608 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 15:02:05.691124  844608 command_runner.go:130] > #   The currently recognized values are:
	I1114 15:02:05.691134  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 15:02:05.691142  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 15:02:05.691151  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 15:02:05.691157  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 15:02:05.691166  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 15:02:05.691173  844608 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 15:02:05.691181  844608 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 15:02:05.691187  844608 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 15:02:05.691195  844608 command_runner.go:130] > #   should be moved to the container's cgroup
	I1114 15:02:05.691199  844608 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 15:02:05.691203  844608 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1114 15:02:05.691209  844608 command_runner.go:130] > runtime_type = "oci"
	I1114 15:02:05.691213  844608 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 15:02:05.691219  844608 command_runner.go:130] > runtime_config_path = ""
	I1114 15:02:05.691223  844608 command_runner.go:130] > monitor_path = ""
	I1114 15:02:05.691229  844608 command_runner.go:130] > monitor_cgroup = ""
	I1114 15:02:05.691235  844608 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 15:02:05.691243  844608 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 15:02:05.691247  844608 command_runner.go:130] > # running containers
	I1114 15:02:05.691254  844608 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1114 15:02:05.691260  844608 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 15:02:05.691338  844608 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 15:02:05.691354  844608 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 15:02:05.691362  844608 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 15:02:05.691370  844608 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 15:02:05.691379  844608 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 15:02:05.691386  844608 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 15:02:05.691397  844608 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 15:02:05.691404  844608 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1114 15:02:05.691412  844608 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 15:02:05.691421  844608 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 15:02:05.691429  844608 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 15:02:05.691437  844608 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1114 15:02:05.691447  844608 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1114 15:02:05.691456  844608 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 15:02:05.691468  844608 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 15:02:05.691481  844608 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 15:02:05.691496  844608 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1114 15:02:05.691511  844608 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 15:02:05.691520  844608 command_runner.go:130] > # Example:
	I1114 15:02:05.691528  844608 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 15:02:05.691539  844608 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 15:02:05.691547  844608 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 15:02:05.691558  844608 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 15:02:05.691564  844608 command_runner.go:130] > # cpuset = 0
	I1114 15:02:05.691572  844608 command_runner.go:130] > # cpushares = "0-1"
	I1114 15:02:05.691581  844608 command_runner.go:130] > # Where:
	I1114 15:02:05.691588  844608 command_runner.go:130] > # The workload name is workload-type.
	I1114 15:02:05.691598  844608 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 15:02:05.691604  844608 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 15:02:05.691611  844608 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 15:02:05.691619  844608 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 15:02:05.691628  844608 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 15:02:05.691634  844608 command_runner.go:130] > # 
	I1114 15:02:05.691641  844608 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 15:02:05.691647  844608 command_runner.go:130] > #
	I1114 15:02:05.691652  844608 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 15:02:05.691658  844608 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 15:02:05.691665  844608 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 15:02:05.691671  844608 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 15:02:05.691677  844608 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 15:02:05.691681  844608 command_runner.go:130] > [crio.image]
	I1114 15:02:05.691689  844608 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 15:02:05.691694  844608 command_runner.go:130] > # default_transport = "docker://"
	I1114 15:02:05.691700  844608 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 15:02:05.691706  844608 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:02:05.691713  844608 command_runner.go:130] > # global_auth_file = ""
	I1114 15:02:05.691717  844608 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 15:02:05.691723  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:02:05.691730  844608 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 15:02:05.691740  844608 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 15:02:05.691749  844608 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:02:05.691754  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:02:05.691760  844608 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 15:02:05.691766  844608 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 15:02:05.691771  844608 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1114 15:02:05.691777  844608 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1114 15:02:05.691783  844608 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 15:02:05.691786  844608 command_runner.go:130] > # pause_command = "/pause"
	I1114 15:02:05.691792  844608 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 15:02:05.691798  844608 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 15:02:05.691804  844608 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 15:02:05.691810  844608 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 15:02:05.691815  844608 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 15:02:05.691819  844608 command_runner.go:130] > # signature_policy = ""
	I1114 15:02:05.691824  844608 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 15:02:05.691830  844608 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 15:02:05.691834  844608 command_runner.go:130] > # changing them here.
	I1114 15:02:05.691839  844608 command_runner.go:130] > # insecure_registries = [
	I1114 15:02:05.691842  844608 command_runner.go:130] > # ]
	I1114 15:02:05.691874  844608 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 15:02:05.691883  844608 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1114 15:02:05.691890  844608 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 15:02:05.691898  844608 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 15:02:05.691905  844608 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 15:02:05.691913  844608 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 15:02:05.691920  844608 command_runner.go:130] > # CNI plugins.
	I1114 15:02:05.691926  844608 command_runner.go:130] > [crio.network]
	I1114 15:02:05.691934  844608 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 15:02:05.691943  844608 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1114 15:02:05.691949  844608 command_runner.go:130] > # cni_default_network = ""
	I1114 15:02:05.691957  844608 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 15:02:05.691964  844608 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 15:02:05.691972  844608 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 15:02:05.691988  844608 command_runner.go:130] > # plugin_dirs = [
	I1114 15:02:05.691994  844608 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 15:02:05.692006  844608 command_runner.go:130] > # ]
	I1114 15:02:05.692019  844608 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 15:02:05.692024  844608 command_runner.go:130] > [crio.metrics]
	I1114 15:02:05.692028  844608 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 15:02:05.692035  844608 command_runner.go:130] > enable_metrics = true
	I1114 15:02:05.692039  844608 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 15:02:05.692044  844608 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 15:02:05.692052  844608 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1114 15:02:05.692061  844608 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 15:02:05.692076  844608 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 15:02:05.692083  844608 command_runner.go:130] > # metrics_collectors = [
	I1114 15:02:05.692087  844608 command_runner.go:130] > # 	"operations",
	I1114 15:02:05.692103  844608 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 15:02:05.692110  844608 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 15:02:05.692115  844608 command_runner.go:130] > # 	"operations_errors",
	I1114 15:02:05.692119  844608 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 15:02:05.692123  844608 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 15:02:05.692127  844608 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 15:02:05.692134  844608 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 15:02:05.692141  844608 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 15:02:05.692145  844608 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 15:02:05.692149  844608 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 15:02:05.692156  844608 command_runner.go:130] > # 	"containers_oom_total",
	I1114 15:02:05.692160  844608 command_runner.go:130] > # 	"containers_oom",
	I1114 15:02:05.692166  844608 command_runner.go:130] > # 	"processes_defunct",
	I1114 15:02:05.692170  844608 command_runner.go:130] > # 	"operations_total",
	I1114 15:02:05.692176  844608 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 15:02:05.692181  844608 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 15:02:05.692186  844608 command_runner.go:130] > # 	"operations_errors_total",
	I1114 15:02:05.692190  844608 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 15:02:05.692197  844608 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 15:02:05.692201  844608 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 15:02:05.692206  844608 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 15:02:05.692212  844608 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 15:02:05.692216  844608 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 15:02:05.692222  844608 command_runner.go:130] > # ]
	I1114 15:02:05.692229  844608 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 15:02:05.692236  844608 command_runner.go:130] > # metrics_port = 9090
	I1114 15:02:05.692241  844608 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 15:02:05.692245  844608 command_runner.go:130] > # metrics_socket = ""
	I1114 15:02:05.692250  844608 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 15:02:05.692258  844608 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 15:02:05.692264  844608 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 15:02:05.692269  844608 command_runner.go:130] > # certificate on any modification event.
	I1114 15:02:05.692273  844608 command_runner.go:130] > # metrics_cert = ""
	I1114 15:02:05.692280  844608 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 15:02:05.692285  844608 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 15:02:05.692292  844608 command_runner.go:130] > # metrics_key = ""
	I1114 15:02:05.692297  844608 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 15:02:05.692301  844608 command_runner.go:130] > [crio.tracing]
	I1114 15:02:05.692306  844608 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 15:02:05.692314  844608 command_runner.go:130] > # enable_tracing = false
	I1114 15:02:05.692322  844608 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1114 15:02:05.692329  844608 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 15:02:05.692336  844608 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 15:02:05.692341  844608 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1114 15:02:05.692346  844608 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 15:02:05.692352  844608 command_runner.go:130] > [crio.stats]
	I1114 15:02:05.692358  844608 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 15:02:05.692363  844608 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 15:02:05.692369  844608 command_runner.go:130] > # stats_collection_period = 0
	I1114 15:02:05.692493  844608 command_runner.go:130] ! time="2023-11-14 15:02:05.665207042Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1114 15:02:05.692518  844608 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
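The dump above is the effective crio.toml that minikube lays down before it restarts CRI-O. As a side illustration (not part of the test output), a small Go check like the sketch below, which assumes the file sits at /etc/crio/crio.toml and uses the github.com/BurntSushi/toml package, could confirm that the keys this run depends on (cgroup_manager, pause_image, pids_limit) were actually written; the struct is invented for the example and is not a minikube type.

    // checkcrio.go - minimal sketch, assuming the config lives at /etc/crio/crio.toml
    // and that github.com/BurntSushi/toml is available in the module.
    package main

    import (
    	"fmt"
    	"log"

    	"github.com/BurntSushi/toml"
    )

    type crioConf struct {
    	Crio struct {
    		Runtime struct {
    			CgroupManager string `toml:"cgroup_manager"`
    			PidsLimit     int64  `toml:"pids_limit"`
    		} `toml:"runtime"`
    		Image struct {
    			PauseImage string `toml:"pause_image"`
    		} `toml:"image"`
    	} `toml:"crio"`
    }

    func main() {
    	var c crioConf
    	if _, err := toml.DecodeFile("/etc/crio/crio.toml", &c); err != nil {
    		log.Fatal(err)
    	}
    	// Expect the values asserted by the bootstrap log: cgroupfs, pause:3.9, 1024.
    	fmt.Println(c.Crio.Runtime.CgroupManager, c.Crio.Image.PauseImage, c.Crio.Runtime.PidsLimit)
    }

Run on the guest, this should print cgroupfs, registry.k8s.io/pause:3.9 and 1024, matching the settings shown in the log above.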
	I1114 15:02:05.692904  844608 cni.go:84] Creating CNI manager for ""
	I1114 15:02:05.692925  844608 cni.go:136] 1 nodes found, recommending kindnet
	I1114 15:02:05.692947  844608 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:02:05.692969  844608 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.63 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-627820 NodeName:multinode-627820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:02:05.693122  844608 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-627820"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
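The InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is rendered from the kubeadm options logged at kubeadm.go:176. A stripped-down sketch of that rendering step is shown below; it uses only text/template from the standard library, and the template text and field names (AdvertiseAddress, BindPort, NodeName) are illustrative rather than minikube's actual identifiers.

    // Minimal sketch of templating an InitConfiguration; field names are assumptions.
    package main

    import (
    	"os"
    	"text/template"
    )

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
      taints: []
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initConfigTmpl))
    	// Values taken from the log above: 192.168.39.63:8443 on multinode-627820.
    	data := struct {
    		AdvertiseAddress string
    		BindPort         int
    		NodeName         string
    	}{"192.168.39.63", 8443, "multinode-627820"}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }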
	I1114 15:02:05.693214  844608 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-627820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:02:05.693269  844608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:02:05.703000  844608 command_runner.go:130] > kubeadm
	I1114 15:02:05.703018  844608 command_runner.go:130] > kubectl
	I1114 15:02:05.703024  844608 command_runner.go:130] > kubelet
	I1114 15:02:05.703082  844608 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:02:05.703150  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:02:05.711924  844608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1114 15:02:05.727264  844608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:02:05.742813  844608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1114 15:02:05.761841  844608 ssh_runner.go:195] Run: grep 192.168.39.63	control-plane.minikube.internal$ /etc/hosts
	I1114 15:02:05.765745  844608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
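The bash one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale line for that hostname is filtered out before the current IP is appended. A rough Go equivalent is sketched below; it writes to a local hosts.local file rather than the real /etc/hosts (which would need root on the guest), and the helper name is invented for the example.

    // Sketch of the idempotent hosts-file update performed by the bash one-liner above.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func ensureHostEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any existing line that already ends in "<tab><host>".
    		if line != "" && !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostEntry("hosts.local", "192.168.39.63", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }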
	I1114 15:02:05.777189  844608 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820 for IP: 192.168.39.63
	I1114 15:02:05.777220  844608 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:05.777374  844608 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:02:05.777410  844608 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:02:05.777458  844608 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key
	I1114 15:02:05.777474  844608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt with IP's: []
	I1114 15:02:05.849480  844608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt ...
	I1114 15:02:05.849514  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt: {Name:mkca096d5d3ebab86a949f6eaa8b1fdb7e430c25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:05.849713  844608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key ...
	I1114 15:02:05.849729  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key: {Name:mkeaa0a874694e8ce8d9213e89601ba0db0fef17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:05.849836  844608 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key.423148a4
	I1114 15:02:05.849852  844608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt.423148a4 with IP's: [192.168.39.63 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 15:02:06.101215  844608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt.423148a4 ...
	I1114 15:02:06.101247  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt.423148a4: {Name:mk41274858b7c6de5dd8fea72ef1d27218d12743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:06.101429  844608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key.423148a4 ...
	I1114 15:02:06.101450  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key.423148a4: {Name:mkea9c44040bd32dbc87ee2ce4d2fb4244fc3534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:06.101552  844608 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt.423148a4 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt
	I1114 15:02:06.101646  844608 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key.423148a4 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key
	I1114 15:02:06.101707  844608 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key
	I1114 15:02:06.101721  844608 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt with IP's: []
	I1114 15:02:06.229322  844608 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt ...
	I1114 15:02:06.229357  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt: {Name:mkce0eca65207629ce8cbfb7bba3d4ba77176b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:06.229552  844608 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key ...
	I1114 15:02:06.229576  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key: {Name:mk0878f4dfd22556e2a111f537bd7008bb1f11ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
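The crypto.go steps above generate each leaf certificate by signing it with the shared minikube CA and embedding the SANs listed in the log (192.168.39.63, 10.96.0.1, 127.0.0.1 and 10.0.0.1 for the apiserver certificate). The sketch below shows that signing flow with the standard crypto/x509 package only; it creates a throwaway CA in memory instead of reusing the one under .minikube, so it illustrates the mechanism rather than reproducing minikube's code.

    // Sketch of CA-signed certificate generation, with the SANs from the log above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; minikube would load .minikube/ca.crt and ca.key instead.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate signed by the CA, carrying the apiserver SANs from the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.63"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")},
    	}
    	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }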
	I1114 15:02:06.229686  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 15:02:06.229707  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 15:02:06.229725  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 15:02:06.229739  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 15:02:06.229761  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 15:02:06.229772  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 15:02:06.229781  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 15:02:06.229793  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 15:02:06.229843  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:02:06.229882  844608 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:02:06.229893  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:02:06.229923  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:02:06.229948  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:02:06.229975  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:02:06.230015  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:02:06.230043  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /usr/share/ca-certificates/8322112.pem
	I1114 15:02:06.230057  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:02:06.230068  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem -> /usr/share/ca-certificates/832211.pem
	I1114 15:02:06.230676  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:02:06.256700  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:02:06.279980  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:02:06.301697  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:02:06.323508  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:02:06.345806  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:02:06.368657  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:02:06.390793  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:02:06.412478  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:02:06.434428  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:02:06.456312  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:02:06.478063  844608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:02:06.494930  844608 ssh_runner.go:195] Run: openssl version
	I1114 15:02:06.500344  844608 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 15:02:06.500540  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:02:06.511126  844608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:02:06.515825  844608 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:02:06.515863  844608 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:02:06.515916  844608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:02:06.521175  844608 command_runner.go:130] > 3ec20f2e
	I1114 15:02:06.521442  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:02:06.531798  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:02:06.542005  844608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:02:06.546406  844608 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:02:06.546647  844608 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:02:06.546699  844608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:02:06.551893  844608 command_runner.go:130] > b5213941
	I1114 15:02:06.552174  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:02:06.562361  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:02:06.572424  844608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:02:06.576822  844608 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:02:06.577067  844608 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:02:06.577138  844608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:02:06.582327  844608 command_runner.go:130] > 51391683
	I1114 15:02:06.582674  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
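Note: the three openssl/ln pairs above register each CA certificate under its OpenSSL subject hash, so that lookup-by-hash in /etc/ssl/certs finds it. A minimal Go sketch of the same operation (not minikube's implementation; it assumes openssl is on the PATH, and the paths in main are only illustrative):

// Sketch: trust a CA by creating the "<subject-hash>.0" symlink OpenSSL looks for.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath, certDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" as in the log above

	// Equivalent of: ln -fs <certPath> <certDir>/<hash>.0
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // force-replace, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}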
	I1114 15:02:06.592297  844608 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:02:06.596171  844608 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:02:06.596250  844608 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:02:06.596334  844608 kubeadm.go:404] StartCluster: {Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:02:06.596493  844608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:02:06.596574  844608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:02:06.633649  844608 cri.go:89] found id: ""
	I1114 15:02:06.633744  844608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:02:06.641946  844608 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1114 15:02:06.641993  844608 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1114 15:02:06.642004  844608 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1114 15:02:06.642107  844608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:02:06.650295  844608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:02:06.658338  844608 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1114 15:02:06.658365  844608 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1114 15:02:06.658371  844608 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1114 15:02:06.658378  844608 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:02:06.658405  844608 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:02:06.658471  844608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:02:06.774073  844608 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:02:06.774138  844608 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1114 15:02:06.774615  844608 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:02:06.774637  844608 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 15:02:07.055553  844608 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:02:07.055612  844608 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:02:07.055789  844608 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:02:07.055818  844608 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:02:07.055981  844608 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:02:07.055992  844608 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:02:07.300607  844608 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:02:07.300648  844608 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:02:07.393775  844608 out.go:204]   - Generating certificates and keys ...
	I1114 15:02:07.393896  844608 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1114 15:02:07.394020  844608 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:02:07.394151  844608 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:02:07.394182  844608 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1114 15:02:07.471577  844608 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 15:02:07.471619  844608 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 15:02:07.711610  844608 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 15:02:07.711644  844608 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1114 15:02:07.938335  844608 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 15:02:07.938385  844608 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1114 15:02:08.088264  844608 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 15:02:08.088375  844608 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1114 15:02:08.406311  844608 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 15:02:08.406357  844608 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1114 15:02:08.406546  844608 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-627820] and IPs [192.168.39.63 127.0.0.1 ::1]
	I1114 15:02:08.406579  844608 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-627820] and IPs [192.168.39.63 127.0.0.1 ::1]
	I1114 15:02:08.537408  844608 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 15:02:08.537435  844608 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1114 15:02:08.537629  844608 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-627820] and IPs [192.168.39.63 127.0.0.1 ::1]
	I1114 15:02:08.537649  844608 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-627820] and IPs [192.168.39.63 127.0.0.1 ::1]
	I1114 15:02:08.728258  844608 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 15:02:08.728292  844608 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 15:02:08.962157  844608 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 15:02:08.962190  844608 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 15:02:09.058633  844608 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 15:02:09.058669  844608 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1114 15:02:09.058884  844608 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:02:09.058907  844608 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:02:09.481846  844608 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:02:09.481881  844608 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:02:09.605490  844608 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:02:09.605538  844608 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:02:09.703632  844608 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:02:09.703673  844608 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:02:09.847736  844608 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:02:09.847802  844608 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:02:09.848564  844608 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:02:09.848584  844608 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:02:09.853532  844608 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:02:09.853547  844608 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:02:09.855416  844608 out.go:204]   - Booting up control plane ...
	I1114 15:02:09.855536  844608 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:02:09.855598  844608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:02:09.855688  844608 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:02:09.855709  844608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:02:09.855806  844608 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:02:09.855818  844608 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:02:09.872319  844608 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:02:09.872359  844608 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:02:09.873350  844608 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:02:09.873376  844608 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:02:09.873454  844608 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:02:09.873467  844608 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 15:02:09.999933  844608 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:02:09.999986  844608 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:02:18.003936  844608 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005466 seconds
	I1114 15:02:18.003973  844608 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.005466 seconds
	I1114 15:02:18.004082  844608 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:02:18.004091  844608 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:02:18.030634  844608 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:02:18.030643  844608 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:02:18.564495  844608 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:02:18.564528  844608 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:02:18.564757  844608 kubeadm.go:322] [mark-control-plane] Marking the node multinode-627820 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:02:18.564788  844608 command_runner.go:130] > [mark-control-plane] Marking the node multinode-627820 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:02:19.079989  844608 kubeadm.go:322] [bootstrap-token] Using token: pt2uu9.6uw5zkhbsv4acd3g
	I1114 15:02:19.081603  844608 out.go:204]   - Configuring RBAC rules ...
	I1114 15:02:19.080087  844608 command_runner.go:130] > [bootstrap-token] Using token: pt2uu9.6uw5zkhbsv4acd3g
	I1114 15:02:19.081756  844608 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:02:19.081835  844608 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:02:19.089353  844608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:02:19.089377  844608 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:02:19.097634  844608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:02:19.097681  844608 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:02:19.101179  844608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:02:19.101206  844608 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:02:19.104394  844608 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:02:19.104412  844608 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:02:19.111524  844608 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:02:19.111540  844608 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:02:19.122790  844608 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:02:19.122805  844608 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:02:19.393297  844608 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:02:19.393356  844608 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1114 15:02:19.496158  844608 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:02:19.496192  844608 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1114 15:02:19.496220  844608 kubeadm.go:322] 
	I1114 15:02:19.496277  844608 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:02:19.496288  844608 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1114 15:02:19.496293  844608 kubeadm.go:322] 
	I1114 15:02:19.496392  844608 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:02:19.496403  844608 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1114 15:02:19.496408  844608 kubeadm.go:322] 
	I1114 15:02:19.496466  844608 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:02:19.496496  844608 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1114 15:02:19.496575  844608 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:02:19.496585  844608 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:02:19.496675  844608 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:02:19.496699  844608 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:02:19.496706  844608 kubeadm.go:322] 
	I1114 15:02:19.496795  844608 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:02:19.496806  844608 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1114 15:02:19.496824  844608 kubeadm.go:322] 
	I1114 15:02:19.496905  844608 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:02:19.496925  844608 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:02:19.496944  844608 kubeadm.go:322] 
	I1114 15:02:19.497017  844608 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:02:19.497021  844608 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1114 15:02:19.497129  844608 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:02:19.497141  844608 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:02:19.497242  844608 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:02:19.497257  844608 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:02:19.497266  844608 kubeadm.go:322] 
	I1114 15:02:19.497381  844608 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:02:19.497387  844608 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:02:19.497553  844608 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:02:19.497579  844608 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1114 15:02:19.497605  844608 kubeadm.go:322] 
	I1114 15:02:19.497728  844608 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pt2uu9.6uw5zkhbsv4acd3g \
	I1114 15:02:19.497739  844608 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token pt2uu9.6uw5zkhbsv4acd3g \
	I1114 15:02:19.497869  844608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:02:19.497880  844608 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:02:19.497922  844608 kubeadm.go:322] 	--control-plane 
	I1114 15:02:19.497940  844608 command_runner.go:130] > 	--control-plane 
	I1114 15:02:19.497950  844608 kubeadm.go:322] 
	I1114 15:02:19.498089  844608 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:02:19.498103  844608 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:02:19.498108  844608 kubeadm.go:322] 
	I1114 15:02:19.498222  844608 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pt2uu9.6uw5zkhbsv4acd3g \
	I1114 15:02:19.498219  844608 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pt2uu9.6uw5zkhbsv4acd3g \
	I1114 15:02:19.498373  844608 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:02:19.498397  844608 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:02:19.498577  844608 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:02:19.498592  844608 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
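Note: the --discovery-token-ca-cert-hash value printed in the join commands above is, per kubeadm's documentation, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A hedged Go sketch that recomputes it from ca.crt (the path is the one used in the certificate copy steps earlier in this log):

// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from the CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(h) // should match the hash in the kubeadm join command above
}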
	I1114 15:02:19.498612  844608 cni.go:84] Creating CNI manager for ""
	I1114 15:02:19.498623  844608 cni.go:136] 1 nodes found, recommending kindnet
	I1114 15:02:19.500503  844608 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 15:02:19.502020  844608 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 15:02:19.509393  844608 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 15:02:19.509411  844608 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 15:02:19.509417  844608 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 15:02:19.509425  844608 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:02:19.509435  844608 command_runner.go:130] > Access: 2023-11-14 15:01:48.037963128 +0000
	I1114 15:02:19.509440  844608 command_runner.go:130] > Modify: 2023-11-09 04:45:09.000000000 +0000
	I1114 15:02:19.509445  844608 command_runner.go:130] > Change: 2023-11-14 15:01:46.199963128 +0000
	I1114 15:02:19.509449  844608 command_runner.go:130] >  Birth: -
	I1114 15:02:19.509834  844608 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 15:02:19.509849  844608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 15:02:19.581274  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 15:02:20.542113  844608 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1114 15:02:20.542141  844608 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1114 15:02:20.542150  844608 command_runner.go:130] > serviceaccount/kindnet created
	I1114 15:02:20.542154  844608 command_runner.go:130] > daemonset.apps/kindnet created
	I1114 15:02:20.542272  844608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:02:20.542339  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=multinode-627820 minikube.k8s.io/updated_at=2023_11_14T15_02_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:20.542339  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:20.715192  844608 command_runner.go:130] > node/multinode-627820 labeled
	I1114 15:02:20.716648  844608 command_runner.go:130] > -16
	I1114 15:02:20.716679  844608 ops.go:34] apiserver oom_adj: -16
	I1114 15:02:20.716709  844608 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
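Note: the oom_adj probe above shells out to cat /proc/$(pgrep kube-apiserver)/oom_adj and records -16. An equivalent stand-alone sketch, run on the node itself (assumes pgrep is installed; this is not the code used by the test):

// Sketch: read the kube-apiserver's oom_adj the same way the probe above does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func apiServerOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("pgrep kube-apiserver: %w", err)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiServerOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // e.g. -16, as logged above
}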
	I1114 15:02:20.716837  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:20.820023  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:20.822132  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:20.904368  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:21.405193  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:21.482323  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:21.905198  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:21.993645  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:22.405292  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:22.482937  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:22.904903  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:22.989008  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:23.405674  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:23.492402  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:23.904942  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:23.993712  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:24.405314  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:24.491419  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:24.905006  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:25.006800  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:25.405385  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:25.494188  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:25.904686  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:25.990806  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:26.405515  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:26.500335  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:26.904884  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:26.989500  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:27.405200  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:27.488895  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:27.905514  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:27.983564  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:28.405263  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:28.496309  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:28.904890  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:28.987919  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:29.405623  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:29.520363  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:29.905458  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:30.005817  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:30.405383  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:30.494537  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:30.905380  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:30.991655  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:31.405261  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:31.577201  844608 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 15:02:31.904714  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:02:31.991479  844608 command_runner.go:130] > NAME      SECRETS   AGE
	I1114 15:02:31.992180  844608 command_runner.go:130] > default   0         0s
	I1114 15:02:31.994118  844608 kubeadm.go:1081] duration metric: took 11.451844403s to wait for elevateKubeSystemPrivileges.
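Note: the repeated "kubectl get sa default" calls above are a fixed-interval poll that waits for kubeadm's post-init controllers to create the "default" ServiceAccount. A hedged in-process equivalent using client-go (the clientset is assumed to be built elsewhere; minikube itself polls via kubectl over SSH as shown):

// Sketch: wait for the "default" ServiceAccount, polling every 500ms like the log above.
package poll

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			return nil // serviceaccount exists; cluster is ready for workloads
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default serviceaccount not found after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}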
	I1114 15:02:31.994151  844608 kubeadm.go:406] StartCluster complete in 25.397839152s
	I1114 15:02:31.994186  844608 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:31.994316  844608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:02:31.995591  844608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:02:31.996436  844608 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:02:31.996721  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:02:31.996724  844608 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:02:31.996961  844608 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:02:31.996977  844608 addons.go:69] Setting default-storageclass=true in profile "multinode-627820"
	I1114 15:02:31.996998  844608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-627820"
	I1114 15:02:31.996963  844608 addons.go:69] Setting storage-provisioner=true in profile "multinode-627820"
	I1114 15:02:31.997026  844608 addons.go:231] Setting addon storage-provisioner=true in "multinode-627820"
	I1114 15:02:31.997102  844608 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:02:31.997104  844608 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:02:31.997930  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:02:31.997967  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:02:31.997932  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:02:31.998026  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:02:31.998402  844608 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 15:02:31.998682  844608 round_trippers.go:463] GET https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:02:31.998703  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:31.998716  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:31.998725  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:32.013833  844608 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1114 15:02:32.013862  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:32.013869  844608 round_trippers.go:580]     Content-Length: 291
	I1114 15:02:32.013875  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:32 GMT
	I1114 15:02:32.013880  844608 round_trippers.go:580]     Audit-Id: a348c6c5-a686-4675-8939-52fd5582766d
	I1114 15:02:32.013886  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:32.013895  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:32.013904  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:32.013918  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:32.013962  844608 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"344","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 15:02:32.014275  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I1114 15:02:32.014537  844608 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"344","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 15:02:32.014628  844608 round_trippers.go:463] PUT https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:02:32.014644  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:32.014660  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:32.014671  844608 round_trippers.go:473]     Content-Type: application/json
	I1114 15:02:32.014681  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:32.014896  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:02:32.015449  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:02:32.015479  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:02:32.015909  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:02:32.016124  844608 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:02:32.016429  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I1114 15:02:32.016834  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:02:32.017390  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:02:32.017432  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:02:32.017854  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:02:32.018399  844608 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:02:32.018424  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:02:32.018465  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:02:32.018664  844608 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
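Note: the rest.Config dumped above can be reproduced directly with client-go. A minimal sketch building the same TLS client configuration and a typed clientset from it (host and paths are the ones visible in the log; error handling is trimmed for brevity):

// Sketch: build a client-go clientset from the profile's client cert/key and the cluster CA.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.39.63:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key",
			CAFile:   "/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("clientset ready:", cs != nil)
}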
	I1114 15:02:32.018945  844608 addons.go:231] Setting addon default-storageclass=true in "multinode-627820"
	I1114 15:02:32.018985  844608 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:02:32.019372  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:02:32.019422  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:02:32.028500  844608 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1114 15:02:32.028526  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:32.028536  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:32 GMT
	I1114 15:02:32.028544  844608 round_trippers.go:580]     Audit-Id: c46b4a05-fe4a-4358-8282-c6f61ad044c6
	I1114 15:02:32.028552  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:32.028561  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:32.028568  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:32.028575  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:32.028583  844608 round_trippers.go:580]     Content-Length: 291
	I1114 15:02:32.029003  844608 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"345","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 15:02:32.029200  844608 round_trippers.go:463] GET https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:02:32.029223  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:32.029233  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:32.029242  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:32.033479  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1114 15:02:32.033482  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I1114 15:02:32.033905  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:02:32.034043  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:02:32.034367  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:02:32.034395  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:02:32.034551  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:02:32.034573  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:02:32.034758  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:02:32.034928  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:02:32.034927  844608 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:02:32.035616  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:02:32.035674  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:02:32.036811  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:02:32.038706  844608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:02:32.040066  844608 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:02:32.040081  844608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:02:32.040096  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:02:32.040955  844608 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1114 15:02:32.040980  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:32.040990  844608 round_trippers.go:580]     Audit-Id: 20cedacd-92a0-46d7-abf1-ea2fa222ca97
	I1114 15:02:32.040999  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:32.041008  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:32.041015  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:32.041029  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:32.041041  844608 round_trippers.go:580]     Content-Length: 291
	I1114 15:02:32.041049  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:32 GMT
	I1114 15:02:32.043042  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:02:32.043294  844608 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"345","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 15:02:32.043408  844608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-627820" context rescaled to 1 replicas
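Note: the GET/PUT pair on .../deployments/coredns/scale above rescales CoreDNS to a single replica for the single-node phase of the cluster. A hedged client-go sketch of the same scale-subresource round trip (assumes an existing clientset; the real requests in this log are issued as shown by the round-tripper):

// Sketch: read the Deployment's scale subresource and write it back with replicas set to 1.
package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}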
	I1114 15:02:32.043444  844608 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:02:32.045067  844608 out.go:177] * Verifying Kubernetes components...
	I1114 15:02:32.043638  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:02:32.043892  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:02:32.045180  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:02:32.045272  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:02:32.047063  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:02:32.047271  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:02:32.047477  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:02:32.051811  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I1114 15:02:32.052208  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:02:32.052651  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:02:32.052679  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:02:32.052996  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:02:32.053179  844608 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:02:32.054477  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:02:32.054706  844608 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:02:32.054721  844608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:02:32.054740  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:02:32.057487  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:02:32.058000  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:02:32.058034  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:02:32.058167  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:02:32.058375  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:02:32.058550  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:02:32.058673  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:02:32.204813  844608 command_runner.go:130] > apiVersion: v1
	I1114 15:02:32.204859  844608 command_runner.go:130] > data:
	I1114 15:02:32.204866  844608 command_runner.go:130] >   Corefile: |
	I1114 15:02:32.204871  844608 command_runner.go:130] >     .:53 {
	I1114 15:02:32.204877  844608 command_runner.go:130] >         errors
	I1114 15:02:32.204884  844608 command_runner.go:130] >         health {
	I1114 15:02:32.204892  844608 command_runner.go:130] >            lameduck 5s
	I1114 15:02:32.204898  844608 command_runner.go:130] >         }
	I1114 15:02:32.204905  844608 command_runner.go:130] >         ready
	I1114 15:02:32.204913  844608 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1114 15:02:32.204917  844608 command_runner.go:130] >            pods insecure
	I1114 15:02:32.204931  844608 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1114 15:02:32.204940  844608 command_runner.go:130] >            ttl 30
	I1114 15:02:32.204943  844608 command_runner.go:130] >         }
	I1114 15:02:32.204948  844608 command_runner.go:130] >         prometheus :9153
	I1114 15:02:32.204953  844608 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1114 15:02:32.204972  844608 command_runner.go:130] >            max_concurrent 1000
	I1114 15:02:32.204982  844608 command_runner.go:130] >         }
	I1114 15:02:32.204988  844608 command_runner.go:130] >         cache 30
	I1114 15:02:32.205002  844608 command_runner.go:130] >         loop
	I1114 15:02:32.205009  844608 command_runner.go:130] >         reload
	I1114 15:02:32.205016  844608 command_runner.go:130] >         loadbalance
	I1114 15:02:32.205025  844608 command_runner.go:130] >     }
	I1114 15:02:32.205031  844608 command_runner.go:130] > kind: ConfigMap
	I1114 15:02:32.205037  844608 command_runner.go:130] > metadata:
	I1114 15:02:32.205049  844608 command_runner.go:130] >   creationTimestamp: "2023-11-14T15:02:19Z"
	I1114 15:02:32.205056  844608 command_runner.go:130] >   name: coredns
	I1114 15:02:32.205064  844608 command_runner.go:130] >   namespace: kube-system
	I1114 15:02:32.205070  844608 command_runner.go:130] >   resourceVersion: "230"
	I1114 15:02:32.205075  844608 command_runner.go:130] >   uid: 4cf214f8-5e9c-406e-819b-2e5b336d9fc3
	I1114 15:02:32.205210  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:02:32.205642  844608 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:02:32.206002  844608 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:02:32.206388  844608 node_ready.go:35] waiting up to 6m0s for node "multinode-627820" to be "Ready" ...
	I1114 15:02:32.206533  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:32.206546  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:32.206557  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:32.206567  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:32.209348  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:32.209369  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:32.209379  844608 round_trippers.go:580]     Audit-Id: d76bf361-2268-4f41-ab7b-d40b1b757043
	I1114 15:02:32.209388  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:32.209396  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:32.209404  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:32.209415  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:32.209437  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:32 GMT
	I1114 15:02:32.210243  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:32.211141  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:32.211163  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:32.211174  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:32.211185  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:32.213046  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:02:32.213066  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:32.213077  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:32 GMT
	I1114 15:02:32.213087  844608 round_trippers.go:580]     Audit-Id: 23d2b709-a5f8-4f3f-97e6-cd897d6ae603
	I1114 15:02:32.213100  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:32.213110  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:32.213121  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:32.213133  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:32.213295  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:32.282551  844608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:02:32.289648  844608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:02:32.713971  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:32.714000  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:32.714009  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:32.714015  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:32.720115  844608 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 15:02:32.720137  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:32.720144  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:32 GMT
	I1114 15:02:32.720149  844608 round_trippers.go:580]     Audit-Id: b0fc44ac-a165-421f-8f88-e878651f1850
	I1114 15:02:32.720154  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:32.720159  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:32.720164  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:32.720169  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:32.720279  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:33.077693  844608 command_runner.go:130] > configmap/coredns replaced
	I1114 15:02:33.080236  844608 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 15:02:33.128969  844608 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1114 15:02:33.138832  844608 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1114 15:02:33.149811  844608 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1114 15:02:33.160916  844608 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1114 15:02:33.168486  844608 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1114 15:02:33.182122  844608 command_runner.go:130] > pod/storage-provisioner created
	I1114 15:02:33.184809  844608 main.go:141] libmachine: Making call to close driver server
	I1114 15:02:33.184836  844608 main.go:141] libmachine: (multinode-627820) Calling .Close
	I1114 15:02:33.184814  844608 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1114 15:02:33.184929  844608 main.go:141] libmachine: Making call to close driver server
	I1114 15:02:33.184952  844608 main.go:141] libmachine: (multinode-627820) Calling .Close
	I1114 15:02:33.185174  844608 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:02:33.185188  844608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:02:33.185196  844608 main.go:141] libmachine: Making call to close driver server
	I1114 15:02:33.185203  844608 main.go:141] libmachine: (multinode-627820) Calling .Close
	I1114 15:02:33.185322  844608 main.go:141] libmachine: (multinode-627820) DBG | Closing plugin on server side
	I1114 15:02:33.185320  844608 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:02:33.185343  844608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:02:33.185356  844608 main.go:141] libmachine: Making call to close driver server
	I1114 15:02:33.185366  844608 main.go:141] libmachine: (multinode-627820) Calling .Close
	I1114 15:02:33.185413  844608 main.go:141] libmachine: (multinode-627820) DBG | Closing plugin on server side
	I1114 15:02:33.185461  844608 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:02:33.185483  844608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:02:33.185736  844608 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:02:33.185762  844608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:02:33.185876  844608 round_trippers.go:463] GET https://192.168.39.63:8443/apis/storage.k8s.io/v1/storageclasses
	I1114 15:02:33.185889  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:33.185901  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:33.185910  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:33.190457  844608 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:02:33.190486  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:33.190495  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:33.190504  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:33.190512  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:33.190519  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:33.190527  844608 round_trippers.go:580]     Content-Length: 1273
	I1114 15:02:33.190540  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:33 GMT
	I1114 15:02:33.190548  844608 round_trippers.go:580]     Audit-Id: 746ec487-3911-407f-8292-867f07d6bfc3
	I1114 15:02:33.190710  844608 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"366"},"items":[{"metadata":{"name":"standard","uid":"4af107b6-eed3-41a6-a294-4a23aca13ec7","resourceVersion":"358","creationTimestamp":"2023-11-14T15:02:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-14T15:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1114 15:02:33.191166  844608 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4af107b6-eed3-41a6-a294-4a23aca13ec7","resourceVersion":"358","creationTimestamp":"2023-11-14T15:02:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-14T15:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1114 15:02:33.191226  844608 round_trippers.go:463] PUT https://192.168.39.63:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1114 15:02:33.191237  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:33.191245  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:33.191251  844608 round_trippers.go:473]     Content-Type: application/json
	I1114 15:02:33.191259  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:33.195830  844608 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:02:33.195849  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:33.195859  844608 round_trippers.go:580]     Audit-Id: 95e782df-9db0-4ba2-913a-241f995bf451
	I1114 15:02:33.195867  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:33.195875  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:33.195882  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:33.195888  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:33.195893  844608 round_trippers.go:580]     Content-Length: 1220
	I1114 15:02:33.195899  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:33 GMT
	I1114 15:02:33.195938  844608 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4af107b6-eed3-41a6-a294-4a23aca13ec7","resourceVersion":"358","creationTimestamp":"2023-11-14T15:02:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-14T15:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1114 15:02:33.196082  844608 main.go:141] libmachine: Making call to close driver server
	I1114 15:02:33.196096  844608 main.go:141] libmachine: (multinode-627820) Calling .Close
	I1114 15:02:33.196314  844608 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:02:33.196331  844608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:02:33.196334  844608 main.go:141] libmachine: (multinode-627820) DBG | Closing plugin on server side
	I1114 15:02:33.198397  844608 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1114 15:02:33.200399  844608 addons.go:502] enable addons completed in 1.203722182s: enabled=[storage-provisioner default-storageclass]
	I1114 15:02:33.213932  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:33.213947  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:33.213954  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:33.213960  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:33.215817  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:02:33.215833  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:33.215842  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:33.215850  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:33 GMT
	I1114 15:02:33.215857  844608 round_trippers.go:580]     Audit-Id: 6d31cd3a-dc5a-4f5c-862a-416af9a33939
	I1114 15:02:33.215866  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:33.215877  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:33.215885  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:33.216046  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:33.714781  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:33.714807  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:33.714816  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:33.714822  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:33.717363  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:33.717382  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:33.717388  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:33 GMT
	I1114 15:02:33.717394  844608 round_trippers.go:580]     Audit-Id: d3dfde0d-d109-4d81-8f03-384492a2e1bd
	I1114 15:02:33.717400  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:33.717408  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:33.717417  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:33.717426  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:33.717650  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:34.214321  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:34.214352  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:34.214361  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:34.214367  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:34.217305  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:34.217328  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:34.217334  844608 round_trippers.go:580]     Audit-Id: 739286f6-09f9-40c9-8ba0-269491f68b26
	I1114 15:02:34.217340  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:34.217345  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:34.217350  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:34.217356  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:34.217363  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:34 GMT
	I1114 15:02:34.217766  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:34.218086  844608 node_ready.go:58] node "multinode-627820" has status "Ready":"False"
	I1114 15:02:34.713842  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:34.713866  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:34.713874  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:34.713881  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:34.716849  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:34.716874  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:34.716881  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:34 GMT
	I1114 15:02:34.716886  844608 round_trippers.go:580]     Audit-Id: cb9449c6-c70b-47e7-ba8a-22c606378b6b
	I1114 15:02:34.716899  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:34.716904  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:34.716910  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:34.716915  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:34.717565  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:35.213929  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:35.213960  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:35.213968  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:35.213981  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:35.216587  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:35.216609  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:35.216616  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:35.216621  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:35 GMT
	I1114 15:02:35.216626  844608 round_trippers.go:580]     Audit-Id: 369c20dc-2bb0-4c05-a956-632507754769
	I1114 15:02:35.216632  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:35.216636  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:35.216641  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:35.216933  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:35.714703  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:35.714742  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:35.714752  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:35.714758  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:35.717699  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:35.717722  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:35.717729  844608 round_trippers.go:580]     Audit-Id: d933ac30-8c37-48ca-9ce5-da94efda39a7
	I1114 15:02:35.717735  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:35.717740  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:35.717745  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:35.717750  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:35.717755  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:35 GMT
	I1114 15:02:35.717928  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:36.214681  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:36.214715  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:36.214729  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:36.214739  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:36.219740  844608 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:02:36.219764  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:36.219771  844608 round_trippers.go:580]     Audit-Id: 0b252856-ce75-4ac1-90d4-0d8618014aee
	I1114 15:02:36.219779  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:36.219784  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:36.219789  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:36.219794  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:36.219801  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:36 GMT
	I1114 15:02:36.219981  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:36.220441  844608 node_ready.go:58] node "multinode-627820" has status "Ready":"False"
	I1114 15:02:36.714643  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:36.714667  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:36.714675  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:36.714682  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:36.717175  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:36.717203  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:36.717213  844608 round_trippers.go:580]     Audit-Id: 865942fd-091d-411c-aec8-9e171a442542
	I1114 15:02:36.717221  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:36.717229  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:36.717237  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:36.717245  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:36.717252  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:36 GMT
	I1114 15:02:36.717890  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"319","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1114 15:02:37.214022  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:37.214053  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.214064  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.214074  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.218469  844608 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:02:37.218500  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.218510  844608 round_trippers.go:580]     Audit-Id: 2f806dfb-4c1d-4163-a405-e164b79d30a6
	I1114 15:02:37.218518  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.218526  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.218535  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.218543  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.218556  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.218803  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:37.219267  844608 node_ready.go:49] node "multinode-627820" has status "Ready":"True"
	I1114 15:02:37.219296  844608 node_ready.go:38] duration metric: took 5.012858706s waiting for node "multinode-627820" to be "Ready" ...
	I1114 15:02:37.219310  844608 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:02:37.219407  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:02:37.219421  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.219431  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.219441  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.224496  844608 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:02:37.224518  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.224528  844608 round_trippers.go:580]     Audit-Id: 3a490f84-dc97-4447-8672-2a545e81589d
	I1114 15:02:37.224535  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.224542  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.224549  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.224556  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.224564  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.229754  844608 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"387"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"387","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52308 chars]
	I1114 15:02:37.234482  844608 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:37.234589  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:02:37.234601  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.234613  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.234628  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.241087  844608 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 15:02:37.241110  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.241119  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.241126  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.241134  844608 round_trippers.go:580]     Audit-Id: def840b1-2504-446e-b9f4-f955e43d7822
	I1114 15:02:37.241142  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.241155  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.241168  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.241534  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"387","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1114 15:02:37.242067  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:37.242083  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.242091  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.242098  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.244431  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:37.244451  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.244460  844608 round_trippers.go:580]     Audit-Id: 019c8199-d1a0-45b1-be2b-b13217dc36f3
	I1114 15:02:37.244467  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.244475  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.244484  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.244494  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.244506  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.244654  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:37.245012  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:02:37.245025  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.245033  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.245040  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.247262  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:37.247282  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.247291  844608 round_trippers.go:580]     Audit-Id: c273a098-0ac8-448d-b454-53f0adb7a385
	I1114 15:02:37.247300  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.247307  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.247313  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.247320  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.247327  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.247654  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"387","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1114 15:02:37.248185  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:37.248205  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.248216  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.248225  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.250298  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:37.250315  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.250325  844608 round_trippers.go:580]     Audit-Id: 1742fea2-ce5f-4e15-9599-445d5a5447b8
	I1114 15:02:37.250333  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.250341  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.250350  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.250363  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.250375  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.250549  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:37.751372  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:02:37.751402  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.751411  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.751418  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.755211  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:37.755238  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.755249  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.755259  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.755268  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.755281  844608 round_trippers.go:580]     Audit-Id: 8b1d73c0-1a21-4869-b9a8-67a83bc43ec1
	I1114 15:02:37.755290  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.755304  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.755631  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"387","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1114 15:02:37.756258  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:37.756279  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:37.756292  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:37.756302  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:37.759334  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:37.759352  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:37.759359  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:37.759365  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:37.759370  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:37 GMT
	I1114 15:02:37.759375  844608 round_trippers.go:580]     Audit-Id: e018bdd9-04b6-4296-85c6-6dd7f703da79
	I1114 15:02:37.759380  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:37.759385  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:37.759587  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:38.251274  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:02:38.251309  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.251323  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.251334  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.254577  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:38.254604  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.254611  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.254617  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.254622  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.254627  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.254632  844608 round_trippers.go:580]     Audit-Id: b0531f7c-ac7f-4bc9-aa61-83bd4476a828
	I1114 15:02:38.254637  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.254746  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"387","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1114 15:02:38.255217  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:38.255232  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.255239  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.255245  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.260899  844608 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:02:38.260926  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.260938  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.260946  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.260955  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.260962  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.260973  844608 round_trippers.go:580]     Audit-Id: f491f82f-e4d5-4c77-93f3-10014239c977
	I1114 15:02:38.260981  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.261110  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:38.751742  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:02:38.751768  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.751777  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.751783  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.754553  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:38.754582  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.754592  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.754601  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.754609  844608 round_trippers.go:580]     Audit-Id: 32fd0249-73ee-47fe-8292-a1ba037acf8a
	I1114 15:02:38.754617  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.754629  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.754638  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.754777  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"399","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1114 15:02:38.755379  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:38.755399  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.755410  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.755420  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.758262  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:38.758282  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.758290  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.758295  844608 round_trippers.go:580]     Audit-Id: 0953a74c-e9de-4c9f-9732-2490938e7f70
	I1114 15:02:38.758300  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.758305  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.758310  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.758315  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.758513  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:38.758819  844608 pod_ready.go:92] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"True"
	I1114 15:02:38.758834  844608 pod_ready.go:81] duration metric: took 1.524321731s waiting for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.758843  844608 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.758891  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-627820
	I1114 15:02:38.758898  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.758905  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.758911  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.761660  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:38.761682  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.761695  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.761703  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.761712  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.761719  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.761725  844608 round_trippers.go:580]     Audit-Id: 53748311-5f13-4ed8-803f-ec4ea7e2342e
	I1114 15:02:38.761730  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.762299  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"333","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1114 15:02:38.762716  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:38.762730  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.762737  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.762742  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.766029  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:38.766050  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.766057  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.766063  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.766068  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.766073  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.766078  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.766083  844608 round_trippers.go:580]     Audit-Id: 84b8a463-01be-4b23-a880-091d9a26b270
	I1114 15:02:38.766419  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:38.766704  844608 pod_ready.go:92] pod "etcd-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:02:38.766718  844608 pod_ready.go:81] duration metric: took 7.870281ms waiting for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.766735  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.766797  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-627820
	I1114 15:02:38.766806  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.766814  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.766820  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.769246  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:38.769265  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.769273  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.769281  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.769288  844608 round_trippers.go:580]     Audit-Id: 2662208d-10db-4869-9305-b2a19b83f29d
	I1114 15:02:38.769296  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.769304  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.769315  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.769868  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-627820","namespace":"kube-system","uid":"8a9b9224-3446-46f7-b525-e1f32bb9a33c","resourceVersion":"348","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.63:8443","kubernetes.io/config.hash":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.mirror":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.seen":"2023-11-14T15:02:19.515752674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1114 15:02:38.770258  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:38.770272  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.770279  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.770285  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.771848  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:02:38.771860  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.771865  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.771871  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.771876  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.771881  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.771886  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.771894  844608 round_trippers.go:580]     Audit-Id: dcceaa7a-8f79-499d-8720-d7679642fc48
	I1114 15:02:38.772192  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:38.772443  844608 pod_ready.go:92] pod "kube-apiserver-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:02:38.772455  844608 pod_ready.go:81] duration metric: took 5.714609ms waiting for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.772467  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.772504  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-627820
	I1114 15:02:38.772511  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.772518  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.772523  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.774336  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:02:38.774354  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.774363  844608 round_trippers.go:580]     Audit-Id: fcd615e7-0609-4f3b-ba89-8589f969ed17
	I1114 15:02:38.774370  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.774378  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.774390  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.774400  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.774409  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.775210  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-627820","namespace":"kube-system","uid":"b4440d06-27f9-4455-ae59-2d8c744b99a2","resourceVersion":"268","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.mirror":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.seen":"2023-11-14T15:02:19.515747223Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1114 15:02:38.814863  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:38.814886  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:38.814894  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:38.814900  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:38.817350  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:38.817368  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:38.817375  844608 round_trippers.go:580]     Audit-Id: 748b9d4d-81d3-4f56-a3c5-2a03b50a347f
	I1114 15:02:38.817380  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:38.817385  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:38.817390  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:38.817396  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:38.817404  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:38 GMT
	I1114 15:02:38.817664  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:38.817995  844608 pod_ready.go:92] pod "kube-controller-manager-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:02:38.818011  844608 pod_ready.go:81] duration metric: took 45.537331ms waiting for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:38.818020  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:39.014478  844608 request.go:629] Waited for 196.370672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:02:39.014560  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:02:39.014566  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:39.014574  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:39.014581  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:39.017121  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:39.017146  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:39.017153  844608 round_trippers.go:580]     Audit-Id: 3230ac77-a165-4b78-8d69-b455a123adf2
	I1114 15:02:39.017159  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:39.017164  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:39.017169  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:39.017174  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:39.017179  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:39 GMT
	I1114 15:02:39.017420  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m24mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"73a6d4c8-2f95-4818-bc62-566099466b42","resourceVersion":"372","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5513 chars]
	I1114 15:02:39.214221  844608 request.go:629] Waited for 196.352196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:39.214321  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:39.214331  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:39.214341  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:39.214361  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:39.217399  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:39.217427  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:39.217436  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:39.217442  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:39 GMT
	I1114 15:02:39.217447  844608 round_trippers.go:580]     Audit-Id: a4cdb95a-9371-40ba-a9f6-0632e887e077
	I1114 15:02:39.217455  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:39.217465  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:39.217478  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:39.217653  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:39.218030  844608 pod_ready.go:92] pod "kube-proxy-m24mc" in "kube-system" namespace has status "Ready":"True"
	I1114 15:02:39.218050  844608 pod_ready.go:81] duration metric: took 400.024359ms waiting for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:39.218060  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:39.415040  844608 request.go:629] Waited for 196.890794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:02:39.415114  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:02:39.415119  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:39.415128  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:39.415134  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:39.418257  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:39.418285  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:39.418295  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:39.418301  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:39.418306  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:39 GMT
	I1114 15:02:39.418311  844608 round_trippers.go:580]     Audit-Id: cfef46ac-b0e0-4e73-b0f8-9643346e1b26
	I1114 15:02:39.418316  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:39.418321  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:39.418732  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-627820","namespace":"kube-system","uid":"ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd","resourceVersion":"281","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.mirror":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.seen":"2023-11-14T15:02:19.515750784Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1114 15:02:39.614526  844608 request.go:629] Waited for 195.389237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:39.614628  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:02:39.614640  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:39.614650  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:39.614660  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:39.617217  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:39.617241  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:39.617252  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:39.617261  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:39.617270  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:39 GMT
	I1114 15:02:39.617276  844608 round_trippers.go:580]     Audit-Id: ebe1cf85-8ce8-435e-9399-87f105ad99f9
	I1114 15:02:39.617281  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:39.617286  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:39.617478  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:02:39.617830  844608 pod_ready.go:92] pod "kube-scheduler-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:02:39.617850  844608 pod_ready.go:81] duration metric: took 399.782917ms waiting for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:02:39.617861  844608 pod_ready.go:38] duration metric: took 2.398530329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:02:39.617879  844608 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:02:39.617936  844608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:02:39.631840  844608 command_runner.go:130] > 1123
	I1114 15:02:39.631911  844608 api_server.go:72] duration metric: took 7.588430609s to wait for apiserver process to appear ...
	I1114 15:02:39.631933  844608 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:02:39.631960  844608 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:02:39.637111  844608 api_server.go:279] https://192.168.39.63:8443/healthz returned 200:
	ok
	I1114 15:02:39.637179  844608 round_trippers.go:463] GET https://192.168.39.63:8443/version
	I1114 15:02:39.637189  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:39.637197  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:39.637203  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:39.638217  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:02:39.638233  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:39.638241  844608 round_trippers.go:580]     Audit-Id: 4dad8523-3b0d-4eae-889a-cde565c87b91
	I1114 15:02:39.638249  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:39.638256  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:39.638264  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:39.638277  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:39.638292  844608 round_trippers.go:580]     Content-Length: 264
	I1114 15:02:39.638305  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:39 GMT
	I1114 15:02:39.638332  844608 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1114 15:02:39.638435  844608 api_server.go:141] control plane version: v1.28.3
	I1114 15:02:39.638454  844608 api_server.go:131] duration metric: took 6.514124ms to wait for apiserver health ...
	I1114 15:02:39.638463  844608 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:02:39.814333  844608 request.go:629] Waited for 175.782107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:02:39.814409  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:02:39.814416  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:39.814430  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:39.814444  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:39.818181  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:39.818211  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:39.818221  844608 round_trippers.go:580]     Audit-Id: ef70d0b3-69c4-43ba-9cbf-1e8d54c0e1df
	I1114 15:02:39.818230  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:39.818238  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:39.818245  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:39.818252  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:39.818260  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:39 GMT
	I1114 15:02:39.819747  844608 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"403"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"399","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53955 chars]
	I1114 15:02:39.821419  844608 system_pods.go:59] 8 kube-system pods found
	I1114 15:02:39.821444  844608 system_pods.go:61] "coredns-5dd5756b68-vh8ng" [25afe3b4-014e-4180-9597-fb237d622c81] Running
	I1114 15:02:39.821449  844608 system_pods.go:61] "etcd-multinode-627820" [f7ab1cba-820a-4cad-8607-dcf55b587b77] Running
	I1114 15:02:39.821454  844608 system_pods.go:61] "kindnet-f8xnr" [457f993f-4895-488a-8277-d5187afda5d3] Running
	I1114 15:02:39.821470  844608 system_pods.go:61] "kube-apiserver-multinode-627820" [8a9b9224-3446-46f7-b525-e1f32bb9a33c] Running
	I1114 15:02:39.821485  844608 system_pods.go:61] "kube-controller-manager-multinode-627820" [b4440d06-27f9-4455-ae59-2d8c744b99a2] Running
	I1114 15:02:39.821492  844608 system_pods.go:61] "kube-proxy-m24mc" [73a6d4c8-2f95-4818-bc62-566099466b42] Running
	I1114 15:02:39.821498  844608 system_pods.go:61] "kube-scheduler-multinode-627820" [ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd] Running
	I1114 15:02:39.821507  844608 system_pods.go:61] "storage-provisioner" [f9cf343d-66fc-4de5-b0e0-df38ace21868] Running
	I1114 15:02:39.821514  844608 system_pods.go:74] duration metric: took 183.042443ms to wait for pod list to return data ...
	I1114 15:02:39.821523  844608 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:02:40.014968  844608 request.go:629] Waited for 193.352427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/default/serviceaccounts
	I1114 15:02:40.015043  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/default/serviceaccounts
	I1114 15:02:40.015048  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:40.015056  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:40.015062  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:40.017763  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:02:40.017784  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:40.017791  844608 round_trippers.go:580]     Content-Length: 261
	I1114 15:02:40.017797  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:40 GMT
	I1114 15:02:40.017802  844608 round_trippers.go:580]     Audit-Id: b5ada732-2bee-4293-985f-40b3fcec4771
	I1114 15:02:40.017813  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:40.017824  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:40.017835  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:40.017843  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:40.017871  844608 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"403"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"43d16d63-7ee4-4137-b6e0-aa3fd01e445d","resourceVersion":"329","creationTimestamp":"2023-11-14T15:02:31Z"}}]}
	I1114 15:02:40.018074  844608 default_sa.go:45] found service account: "default"
	I1114 15:02:40.018093  844608 default_sa.go:55] duration metric: took 196.564558ms for default service account to be created ...
	I1114 15:02:40.018102  844608 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:02:40.214580  844608 request.go:629] Waited for 196.3878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:02:40.214647  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:02:40.214652  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:40.214663  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:40.214670  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:40.218118  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:40.218137  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:40.218144  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:40.218150  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:40 GMT
	I1114 15:02:40.218155  844608 round_trippers.go:580]     Audit-Id: 63f2387e-96ce-4922-8c1a-78bb08e46b08
	I1114 15:02:40.218160  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:40.218165  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:40.218170  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:40.219573  844608 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"404"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"399","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53955 chars]
	I1114 15:02:40.221280  844608 system_pods.go:86] 8 kube-system pods found
	I1114 15:02:40.221304  844608 system_pods.go:89] "coredns-5dd5756b68-vh8ng" [25afe3b4-014e-4180-9597-fb237d622c81] Running
	I1114 15:02:40.221309  844608 system_pods.go:89] "etcd-multinode-627820" [f7ab1cba-820a-4cad-8607-dcf55b587b77] Running
	I1114 15:02:40.221313  844608 system_pods.go:89] "kindnet-f8xnr" [457f993f-4895-488a-8277-d5187afda5d3] Running
	I1114 15:02:40.221317  844608 system_pods.go:89] "kube-apiserver-multinode-627820" [8a9b9224-3446-46f7-b525-e1f32bb9a33c] Running
	I1114 15:02:40.221322  844608 system_pods.go:89] "kube-controller-manager-multinode-627820" [b4440d06-27f9-4455-ae59-2d8c744b99a2] Running
	I1114 15:02:40.221325  844608 system_pods.go:89] "kube-proxy-m24mc" [73a6d4c8-2f95-4818-bc62-566099466b42] Running
	I1114 15:02:40.221329  844608 system_pods.go:89] "kube-scheduler-multinode-627820" [ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd] Running
	I1114 15:02:40.221333  844608 system_pods.go:89] "storage-provisioner" [f9cf343d-66fc-4de5-b0e0-df38ace21868] Running
	I1114 15:02:40.221341  844608 system_pods.go:126] duration metric: took 203.232754ms to wait for k8s-apps to be running ...
	I1114 15:02:40.221349  844608 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:02:40.221404  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:02:40.234467  844608 system_svc.go:56] duration metric: took 13.110679ms WaitForService to wait for kubelet.
	I1114 15:02:40.234488  844608 kubeadm.go:581] duration metric: took 8.191013924s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:02:40.234506  844608 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:02:40.414945  844608 request.go:629] Waited for 180.339258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes
	I1114 15:02:40.415027  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes
	I1114 15:02:40.415034  844608 round_trippers.go:469] Request Headers:
	I1114 15:02:40.415045  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:02:40.415059  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:02:40.418246  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:02:40.418272  844608 round_trippers.go:577] Response Headers:
	I1114 15:02:40.418279  844608 round_trippers.go:580]     Audit-Id: a5cbcf4a-7130-4788-99e9-d8de551dccd8
	I1114 15:02:40.418284  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:02:40.418289  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:02:40.418294  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:02:40.418302  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:02:40.418308  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:02:40 GMT
	I1114 15:02:40.418576  844608 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I1114 15:02:40.418964  844608 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:02:40.418991  844608 node_conditions.go:123] node cpu capacity is 2
	I1114 15:02:40.419012  844608 node_conditions.go:105] duration metric: took 184.499937ms to run NodePressure ...
	I1114 15:02:40.419028  844608 start.go:228] waiting for startup goroutines ...
	I1114 15:02:40.419038  844608 start.go:233] waiting for cluster config update ...
	I1114 15:02:40.419058  844608 start.go:242] writing updated cluster config ...
	I1114 15:02:40.421025  844608 out.go:177] 
	I1114 15:02:40.422670  844608 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:02:40.422770  844608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:02:40.424575  844608 out.go:177] * Starting worker node multinode-627820-m02 in cluster multinode-627820
	I1114 15:02:40.425843  844608 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:02:40.425874  844608 cache.go:56] Caching tarball of preloaded images
	I1114 15:02:40.425983  844608 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:02:40.425996  844608 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:02:40.426082  844608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:02:40.426243  844608 start.go:365] acquiring machines lock for multinode-627820-m02: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:02:40.426297  844608 start.go:369] acquired machines lock for "multinode-627820-m02" in 34.547µs
	I1114 15:02:40.426318  844608 start.go:93] Provisioning new machine with config: &{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 15:02:40.426396  844608 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1114 15:02:40.428190  844608 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1114 15:02:40.428284  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:02:40.428311  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:02:40.442652  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I1114 15:02:40.443033  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:02:40.443448  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:02:40.443470  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:02:40.443865  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:02:40.444068  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:02:40.444234  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:02:40.444339  844608 start.go:159] libmachine.API.Create for "multinode-627820" (driver="kvm2")
	I1114 15:02:40.444374  844608 client.go:168] LocalClient.Create starting
	I1114 15:02:40.444409  844608 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 15:02:40.444454  844608 main.go:141] libmachine: Decoding PEM data...
	I1114 15:02:40.444478  844608 main.go:141] libmachine: Parsing certificate...
	I1114 15:02:40.444545  844608 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 15:02:40.444583  844608 main.go:141] libmachine: Decoding PEM data...
	I1114 15:02:40.444607  844608 main.go:141] libmachine: Parsing certificate...
	I1114 15:02:40.444639  844608 main.go:141] libmachine: Running pre-create checks...
	I1114 15:02:40.444652  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .PreCreateCheck
	I1114 15:02:40.444862  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetConfigRaw
	I1114 15:02:40.445248  844608 main.go:141] libmachine: Creating machine...
	I1114 15:02:40.445266  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .Create
	I1114 15:02:40.445395  844608 main.go:141] libmachine: (multinode-627820-m02) Creating KVM machine...
	I1114 15:02:40.446588  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found existing default KVM network
	I1114 15:02:40.446756  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found existing private KVM network mk-multinode-627820
	I1114 15:02:40.446907  844608 main.go:141] libmachine: (multinode-627820-m02) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02 ...
	I1114 15:02:40.446943  844608 main.go:141] libmachine: (multinode-627820-m02) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 15:02:40.446971  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:40.446853  844983 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:02:40.447075  844608 main.go:141] libmachine: (multinode-627820-m02) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 15:02:40.682628  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:40.682469  844983 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa...
	I1114 15:02:40.925543  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:40.925375  844983 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/multinode-627820-m02.rawdisk...
	I1114 15:02:40.925585  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Writing magic tar header
	I1114 15:02:40.925608  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Writing SSH key tar header
	I1114 15:02:40.925622  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:40.925490  844983 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02 ...
	I1114 15:02:40.925640  844608 main.go:141] libmachine: (multinode-627820-m02) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02 (perms=drwx------)
	I1114 15:02:40.925652  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02
	I1114 15:02:40.925669  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 15:02:40.925677  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:02:40.925689  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 15:02:40.925711  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 15:02:40.925726  844608 main.go:141] libmachine: (multinode-627820-m02) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 15:02:40.925742  844608 main.go:141] libmachine: (multinode-627820-m02) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 15:02:40.925751  844608 main.go:141] libmachine: (multinode-627820-m02) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 15:02:40.925760  844608 main.go:141] libmachine: (multinode-627820-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 15:02:40.925776  844608 main.go:141] libmachine: (multinode-627820-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 15:02:40.925797  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home/jenkins
	I1114 15:02:40.925811  844608 main.go:141] libmachine: (multinode-627820-m02) Creating domain...
	I1114 15:02:40.925819  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Checking permissions on dir: /home
	I1114 15:02:40.925833  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Skipping /home - not owner
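[editor's note] The common.go lines above cover generating the machine's id_rsa key pair and tightening directory permissions before the raw disk is written. A minimal sketch of that key-generation step with the standard library plus golang.org/x/crypto/ssh follows; writeSSHKeyPair and the 2048-bit key size are assumptions for illustration, not the driver's implementation.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    // writeSSHKeyPair creates <dir>/id_rsa and <dir>/id_rsa.pub, using the
    // restrictive 0600 mode the provisioner expects for the private key.
    func writeSSHKeyPair(dir string, bits int) error {
        priv, err := rsa.GenerateKey(rand.Reader, bits)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0644)
    }

    func main() {
        dir, err := os.MkdirTemp("", "machine-keys")
        if err != nil {
            panic(err)
        }
        if err := writeSSHKeyPair(dir, 2048); err != nil {
            panic(err)
        }
        fmt.Println("wrote key pair to", dir)
    }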
	I1114 15:02:40.926748  844608 main.go:141] libmachine: (multinode-627820-m02) define libvirt domain using xml: 
	I1114 15:02:40.926773  844608 main.go:141] libmachine: (multinode-627820-m02) <domain type='kvm'>
	I1114 15:02:40.926781  844608 main.go:141] libmachine: (multinode-627820-m02)   <name>multinode-627820-m02</name>
	I1114 15:02:40.926794  844608 main.go:141] libmachine: (multinode-627820-m02)   <memory unit='MiB'>2200</memory>
	I1114 15:02:40.926808  844608 main.go:141] libmachine: (multinode-627820-m02)   <vcpu>2</vcpu>
	I1114 15:02:40.926823  844608 main.go:141] libmachine: (multinode-627820-m02)   <features>
	I1114 15:02:40.926830  844608 main.go:141] libmachine: (multinode-627820-m02)     <acpi/>
	I1114 15:02:40.926837  844608 main.go:141] libmachine: (multinode-627820-m02)     <apic/>
	I1114 15:02:40.926843  844608 main.go:141] libmachine: (multinode-627820-m02)     <pae/>
	I1114 15:02:40.926851  844608 main.go:141] libmachine: (multinode-627820-m02)     
	I1114 15:02:40.926876  844608 main.go:141] libmachine: (multinode-627820-m02)   </features>
	I1114 15:02:40.926895  844608 main.go:141] libmachine: (multinode-627820-m02)   <cpu mode='host-passthrough'>
	I1114 15:02:40.926902  844608 main.go:141] libmachine: (multinode-627820-m02)   
	I1114 15:02:40.926909  844608 main.go:141] libmachine: (multinode-627820-m02)   </cpu>
	I1114 15:02:40.926919  844608 main.go:141] libmachine: (multinode-627820-m02)   <os>
	I1114 15:02:40.926925  844608 main.go:141] libmachine: (multinode-627820-m02)     <type>hvm</type>
	I1114 15:02:40.926934  844608 main.go:141] libmachine: (multinode-627820-m02)     <boot dev='cdrom'/>
	I1114 15:02:40.926943  844608 main.go:141] libmachine: (multinode-627820-m02)     <boot dev='hd'/>
	I1114 15:02:40.926954  844608 main.go:141] libmachine: (multinode-627820-m02)     <bootmenu enable='no'/>
	I1114 15:02:40.926962  844608 main.go:141] libmachine: (multinode-627820-m02)   </os>
	I1114 15:02:40.926970  844608 main.go:141] libmachine: (multinode-627820-m02)   <devices>
	I1114 15:02:40.926978  844608 main.go:141] libmachine: (multinode-627820-m02)     <disk type='file' device='cdrom'>
	I1114 15:02:40.927010  844608 main.go:141] libmachine: (multinode-627820-m02)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/boot2docker.iso'/>
	I1114 15:02:40.927036  844608 main.go:141] libmachine: (multinode-627820-m02)       <target dev='hdc' bus='scsi'/>
	I1114 15:02:40.927048  844608 main.go:141] libmachine: (multinode-627820-m02)       <readonly/>
	I1114 15:02:40.927057  844608 main.go:141] libmachine: (multinode-627820-m02)     </disk>
	I1114 15:02:40.927068  844608 main.go:141] libmachine: (multinode-627820-m02)     <disk type='file' device='disk'>
	I1114 15:02:40.927075  844608 main.go:141] libmachine: (multinode-627820-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 15:02:40.927084  844608 main.go:141] libmachine: (multinode-627820-m02)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/multinode-627820-m02.rawdisk'/>
	I1114 15:02:40.927094  844608 main.go:141] libmachine: (multinode-627820-m02)       <target dev='hda' bus='virtio'/>
	I1114 15:02:40.927123  844608 main.go:141] libmachine: (multinode-627820-m02)     </disk>
	I1114 15:02:40.927143  844608 main.go:141] libmachine: (multinode-627820-m02)     <interface type='network'>
	I1114 15:02:40.927153  844608 main.go:141] libmachine: (multinode-627820-m02)       <source network='mk-multinode-627820'/>
	I1114 15:02:40.927160  844608 main.go:141] libmachine: (multinode-627820-m02)       <model type='virtio'/>
	I1114 15:02:40.927167  844608 main.go:141] libmachine: (multinode-627820-m02)     </interface>
	I1114 15:02:40.927174  844608 main.go:141] libmachine: (multinode-627820-m02)     <interface type='network'>
	I1114 15:02:40.927185  844608 main.go:141] libmachine: (multinode-627820-m02)       <source network='default'/>
	I1114 15:02:40.927194  844608 main.go:141] libmachine: (multinode-627820-m02)       <model type='virtio'/>
	I1114 15:02:40.927201  844608 main.go:141] libmachine: (multinode-627820-m02)     </interface>
	I1114 15:02:40.927212  844608 main.go:141] libmachine: (multinode-627820-m02)     <serial type='pty'>
	I1114 15:02:40.927238  844608 main.go:141] libmachine: (multinode-627820-m02)       <target port='0'/>
	I1114 15:02:40.927257  844608 main.go:141] libmachine: (multinode-627820-m02)     </serial>
	I1114 15:02:40.927275  844608 main.go:141] libmachine: (multinode-627820-m02)     <console type='pty'>
	I1114 15:02:40.927296  844608 main.go:141] libmachine: (multinode-627820-m02)       <target type='serial' port='0'/>
	I1114 15:02:40.927310  844608 main.go:141] libmachine: (multinode-627820-m02)     </console>
	I1114 15:02:40.927323  844608 main.go:141] libmachine: (multinode-627820-m02)     <rng model='virtio'>
	I1114 15:02:40.927337  844608 main.go:141] libmachine: (multinode-627820-m02)       <backend model='random'>/dev/random</backend>
	I1114 15:02:40.927349  844608 main.go:141] libmachine: (multinode-627820-m02)     </rng>
	I1114 15:02:40.927361  844608 main.go:141] libmachine: (multinode-627820-m02)     
	I1114 15:02:40.927373  844608 main.go:141] libmachine: (multinode-627820-m02)     
	I1114 15:02:40.927395  844608 main.go:141] libmachine: (multinode-627820-m02)   </devices>
	I1114 15:02:40.927412  844608 main.go:141] libmachine: (multinode-627820-m02) </domain>
	I1114 15:02:40.927441  844608 main.go:141] libmachine: (multinode-627820-m02) 
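[editor's note] The XML above is the domain definition minikube hands to libvirt before "Creating domain...". A minimal sketch of defining and booting a domain from such XML with the official Go bindings follows; the import path libvirt.org/go/libvirt, the qemu:///system URI, and the createDomain helper are assumptions for illustration and error handling is simplified.

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    // createDomain defines a persistent domain from the given XML file and
    // starts it, mirroring the "define libvirt domain using xml" step above.
    func createDomain(xmlPath string) error {
        xml, err := os.ReadFile(xmlPath)
        if err != nil {
            return err
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()

        // Create boots the previously defined (persistent) domain.
        if err := dom.Create(); err != nil {
            return fmt.Errorf("start domain: %w", err)
        }
        name, _ := dom.GetName()
        fmt.Println("started domain", name)
        return nil
    }

    func main() {
        if err := createDomain("domain.xml"); err != nil {
            panic(err)
        }
    }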
	I1114 15:02:40.934512  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:a9:94:7e in network default
	I1114 15:02:40.935184  844608 main.go:141] libmachine: (multinode-627820-m02) Ensuring networks are active...
	I1114 15:02:40.935211  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:40.935870  844608 main.go:141] libmachine: (multinode-627820-m02) Ensuring network default is active
	I1114 15:02:40.936296  844608 main.go:141] libmachine: (multinode-627820-m02) Ensuring network mk-multinode-627820 is active
	I1114 15:02:40.936715  844608 main.go:141] libmachine: (multinode-627820-m02) Getting domain xml...
	I1114 15:02:40.937457  844608 main.go:141] libmachine: (multinode-627820-m02) Creating domain...
	I1114 15:02:42.186163  844608 main.go:141] libmachine: (multinode-627820-m02) Waiting to get IP...
	I1114 15:02:42.187142  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:42.187596  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:42.187646  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:42.187582  844983 retry.go:31] will retry after 238.13076ms: waiting for machine to come up
	I1114 15:02:42.427103  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:42.427604  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:42.427635  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:42.427546  844983 retry.go:31] will retry after 341.196022ms: waiting for machine to come up
	I1114 15:02:42.769942  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:42.770442  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:42.770470  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:42.770389  844983 retry.go:31] will retry after 416.874681ms: waiting for machine to come up
	I1114 15:02:43.189136  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:43.189567  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:43.189598  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:43.189521  844983 retry.go:31] will retry after 580.15294ms: waiting for machine to come up
	I1114 15:02:43.771192  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:43.771690  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:43.771715  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:43.771647  844983 retry.go:31] will retry after 567.072835ms: waiting for machine to come up
	I1114 15:02:44.340941  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:44.341385  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:44.341420  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:44.341334  844983 retry.go:31] will retry after 945.193534ms: waiting for machine to come up
	I1114 15:02:45.288534  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:45.289038  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:45.289066  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:45.288979  844983 retry.go:31] will retry after 723.50394ms: waiting for machine to come up
	I1114 15:02:46.014457  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:46.014904  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:46.014931  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:46.014851  844983 retry.go:31] will retry after 1.475444437s: waiting for machine to come up
	I1114 15:02:47.491881  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:47.492404  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:47.492431  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:47.492317  844983 retry.go:31] will retry after 1.163228816s: waiting for machine to come up
	I1114 15:02:48.657552  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:48.657909  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:48.657935  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:48.657860  844983 retry.go:31] will retry after 2.232165902s: waiting for machine to come up
	I1114 15:02:50.891880  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:50.892450  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:50.892487  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:50.892390  844983 retry.go:31] will retry after 2.404175725s: waiting for machine to come up
	I1114 15:02:53.300210  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:53.300552  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:53.300578  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:53.300498  844983 retry.go:31] will retry after 2.568235588s: waiting for machine to come up
	I1114 15:02:55.870217  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:55.870642  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:55.870676  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:55.870579  844983 retry.go:31] will retry after 3.959478574s: waiting for machine to come up
	I1114 15:02:59.833870  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:02:59.834311  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find current IP address of domain multinode-627820-m02 in network mk-multinode-627820
	I1114 15:02:59.834337  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | I1114 15:02:59.834266  844983 retry.go:31] will retry after 4.899749579s: waiting for machine to come up
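[editor's note] The repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop that backs off while the new VM acquires a DHCP lease; the log continues below with the successful attempt. A generic sketch of that polling pattern follows; waitForIP, the backoff constants, and the stub lookup are illustrative, not the driver's actual retry helper.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // waitForIP keeps calling lookup until it returns an address or the deadline
    // passes, sleeping a little longer after every failed attempt, similar to
    // the increasing retry intervals visible in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            fmt.Printf("attempt %d failed, retrying after %s\n", attempt, delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // grow the wait between attempts
            }
        }
    }

    func main() {
        // Stub lookup that "finds" an address on the 4th call, for demonstration only.
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errNoLease
            }
            return "192.168.39.38", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }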
	I1114 15:03:04.738368  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:04.738842  844608 main.go:141] libmachine: (multinode-627820-m02) Found IP for machine: 192.168.39.38
	I1114 15:03:04.738870  844608 main.go:141] libmachine: (multinode-627820-m02) Reserving static IP address...
	I1114 15:03:04.738888  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has current primary IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:04.739407  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | unable to find host DHCP lease matching {name: "multinode-627820-m02", mac: "52:54:00:69:21:cd", ip: "192.168.39.38"} in network mk-multinode-627820
	I1114 15:03:04.814651  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Getting to WaitForSSH function...
	I1114 15:03:04.814689  844608 main.go:141] libmachine: (multinode-627820-m02) Reserved static IP address: 192.168.39.38
	I1114 15:03:04.814741  844608 main.go:141] libmachine: (multinode-627820-m02) Waiting for SSH to be available...
	I1114 15:03:04.817488  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:04.818125  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:04.818160  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:04.818332  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Using SSH client type: external
	I1114 15:03:04.818363  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa (-rw-------)
	I1114 15:03:04.818410  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:03:04.818437  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | About to run SSH command:
	I1114 15:03:04.818455  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | exit 0
	I1114 15:03:04.912369  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | SSH cmd err, output: <nil>: 
	I1114 15:03:04.912676  844608 main.go:141] libmachine: (multinode-627820-m02) KVM machine creation complete!
	I1114 15:03:04.912972  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetConfigRaw
	I1114 15:03:04.913550  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:04.913767  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:04.913998  844608 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 15:03:04.914016  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetState
	I1114 15:03:04.915455  844608 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 15:03:04.915474  844608 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 15:03:04.915483  844608 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 15:03:04.915492  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:04.917842  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:04.918226  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:04.918251  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:04.918394  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:04.918614  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:04.918805  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:04.918978  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:04.919165  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:03:04.919536  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:03:04.919550  844608 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 15:03:05.052155  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
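[editor's note] Provisioning above waits for SSH by repeatedly running "exit 0" on the guest, first via the external ssh binary and then via the native client. A minimal sketch of that probe with golang.org/x/crypto/ssh follows; the sshReady helper and key path are assumptions, while user "docker" and the relaxed host-key checking mirror the options visible in the logged ssh invocation.

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady dials the guest and runs "exit 0"; a nil error means sshd is up
    // and the injected key is accepted.
    func sshReady(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        err := sshReady("192.168.39.38:22", "docker", "/path/to/id_rsa")
        fmt.Println("ssh ready:", err == nil, err)
    }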
	I1114 15:03:05.052180  844608 main.go:141] libmachine: Detecting the provisioner...
	I1114 15:03:05.052189  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:05.055014  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.055457  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.055488  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.055613  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:05.055878  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.056043  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.056259  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:05.056472  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:03:05.056837  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:03:05.056856  844608 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 15:03:05.189538  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 15:03:05.189619  844608 main.go:141] libmachine: found compatible host: buildroot
	I1114 15:03:05.189632  844608 main.go:141] libmachine: Provisioning with buildroot...
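[editor's note] The provisioner is chosen by reading /etc/os-release on the guest and matching the ID field (buildroot here). A small sketch of that parse follows; detectProvisioner is an illustrative helper, not libmachine's detector.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner extracts the ID field from /etc/os-release content, the
    // same field the log shows being matched against "buildroot".
    func detectProvisioner(osRelease string) string {
        scanner := bufio.NewScanner(strings.NewReader(osRelease))
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            if v, ok := strings.CutPrefix(line, "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return ""
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2021.02.12-1-g9cb9327-dirty\nID=buildroot\nVERSION_ID=2021.02.12\n"
        fmt.Println(detectProvisioner(sample)) // buildroot
    }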
	I1114 15:03:05.189641  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:03:05.189935  844608 buildroot.go:166] provisioning hostname "multinode-627820-m02"
	I1114 15:03:05.189964  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:03:05.190199  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:05.193466  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.193905  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.193938  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.194046  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:05.194283  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.194452  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.194657  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:05.194850  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:03:05.195233  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:03:05.195255  844608 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-627820-m02 && echo "multinode-627820-m02" | sudo tee /etc/hostname
	I1114 15:03:05.337933  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-627820-m02
	
	I1114 15:03:05.337964  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:05.341122  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.341515  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.341541  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.341712  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:05.341919  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.342096  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.342262  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:05.342449  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:03:05.342764  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:03:05.342801  844608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-627820-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-627820-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-627820-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:03:05.482491  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:03:05.482527  844608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:03:05.482549  844608 buildroot.go:174] setting up certificates
	I1114 15:03:05.482560  844608 provision.go:83] configureAuth start
	I1114 15:03:05.482574  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:03:05.482918  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:03:05.485890  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.486297  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.486326  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.486544  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:05.489141  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.489543  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.489570  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.489736  844608 provision.go:138] copyHostCerts
	I1114 15:03:05.489773  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:03:05.489812  844608 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:03:05.489821  844608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:03:05.489886  844608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:03:05.489961  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:03:05.489978  844608 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:03:05.489985  844608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:03:05.490008  844608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:03:05.490050  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:03:05.490066  844608 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:03:05.490072  844608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:03:05.490092  844608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:03:05.490136  844608 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.multinode-627820-m02 san=[192.168.39.38 192.168.39.38 localhost 127.0.0.1 minikube multinode-627820-m02]
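[editor's note] configureAuth above issues a server certificate signed by the minikube CA with the listed SANs (the node IP, localhost, the hostname). A minimal sketch of issuing such a certificate with crypto/x509 follows; the newServerCert helper, one-year validity, 2048-bit keys, and the throwaway CA built in main are assumptions for illustration.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate for the given DNS names and IPs,
    // signed by the provided CA, roughly what the "generating server cert" line
    // above describes.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: cn},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
        // Build a throwaway self-signed CA so the example is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        pemBytes, _, err := newServerCert(caCert, caKey,
            "multinode-627820-m02",
            []string{"localhost", "minikube", "multinode-627820-m02"},
            []net.IP{net.ParseIP("192.168.39.38"), net.ParseIP("127.0.0.1")})
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d bytes of PEM\n", len(pemBytes))
    }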
	I1114 15:03:05.784689  844608 provision.go:172] copyRemoteCerts
	I1114 15:03:05.784792  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:03:05.784841  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:05.788014  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.788463  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.788498  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.788660  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:05.788930  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.789104  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:05.789259  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:03:05.883199  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 15:03:05.883285  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:03:05.906273  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 15:03:05.906334  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1114 15:03:05.927520  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 15:03:05.927589  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:03:05.949967  844608 provision.go:86] duration metric: configureAuth took 467.38351ms
	I1114 15:03:05.950056  844608 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:03:05.950256  844608 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:03:05.950333  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:05.953441  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.953871  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:05.953909  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:05.954071  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:05.954305  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.954470  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:05.954639  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:05.954802  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:03:05.955146  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:03:05.955164  844608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:03:06.278209  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:03:06.278239  844608 main.go:141] libmachine: Checking connection to Docker...
	I1114 15:03:06.278248  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetURL
	I1114 15:03:06.279707  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | Using libvirt version 6000000
	I1114 15:03:06.282089  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.282517  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.282544  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.282760  844608 main.go:141] libmachine: Docker is up and running!
	I1114 15:03:06.282776  844608 main.go:141] libmachine: Reticulating splines...
	I1114 15:03:06.282783  844608 client.go:171] LocalClient.Create took 25.838398703s
	I1114 15:03:06.282815  844608 start.go:167] duration metric: libmachine.API.Create for "multinode-627820" took 25.838477125s
	I1114 15:03:06.282827  844608 start.go:300] post-start starting for "multinode-627820-m02" (driver="kvm2")
	I1114 15:03:06.282836  844608 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:03:06.282851  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:06.283105  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:03:06.283138  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:06.285063  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.285438  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.285471  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.285707  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:06.285945  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:06.286102  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:06.286290  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:03:06.378114  844608 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:03:06.382398  844608 command_runner.go:130] > NAME=Buildroot
	I1114 15:03:06.382447  844608 command_runner.go:130] > VERSION=2021.02.12-1-g9cb9327-dirty
	I1114 15:03:06.382455  844608 command_runner.go:130] > ID=buildroot
	I1114 15:03:06.382466  844608 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 15:03:06.382473  844608 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 15:03:06.382515  844608 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:03:06.382532  844608 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:03:06.382688  844608 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:03:06.382819  844608 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:03:06.382846  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /etc/ssl/certs/8322112.pem
	I1114 15:03:06.382978  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:03:06.391572  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:03:06.413827  844608 start.go:303] post-start completed in 130.988193ms
	I1114 15:03:06.413887  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetConfigRaw
	I1114 15:03:06.414526  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:03:06.417992  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.418470  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.418502  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.418821  844608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:03:06.419030  844608 start.go:128] duration metric: createHost completed in 25.992621091s
	I1114 15:03:06.419060  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:06.421503  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.421808  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.421844  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.422020  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:06.422320  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:06.422526  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:06.422697  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:06.422913  844608 main.go:141] libmachine: Using SSH client type: native
	I1114 15:03:06.423286  844608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:03:06.423299  844608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:03:06.553445  844608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699974186.525663798
	
	I1114 15:03:06.553469  844608 fix.go:206] guest clock: 1699974186.525663798
	I1114 15:03:06.553476  844608 fix.go:219] Guest: 2023-11-14 15:03:06.525663798 +0000 UTC Remote: 2023-11-14 15:03:06.419044273 +0000 UTC m=+91.806522806 (delta=106.619525ms)
	I1114 15:03:06.553493  844608 fix.go:190] guest clock delta is within tolerance: 106.619525ms
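[editor's note] The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) with the host clock and accept the drift when it is within tolerance. A small sketch of that comparison follows; parseGuestClock and the 2-second tolerance are assumptions for illustration.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
    // (nine fractional digits) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1699974186.525663798") // sample value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance for this sketch
        fmt.Printf("guest clock delta %s, within tolerance: %v\n", delta, delta <= tolerance)
    }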
	I1114 15:03:06.553498  844608 start.go:83] releasing machines lock for "multinode-627820-m02", held for 26.127193824s
	I1114 15:03:06.553517  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:06.553849  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:03:06.556943  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.557370  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.557394  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.559323  844608 out.go:177] * Found network options:
	I1114 15:03:06.560714  844608 out.go:177]   - NO_PROXY=192.168.39.63
	W1114 15:03:06.562140  844608 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 15:03:06.562188  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:06.562737  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:06.562932  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:03:06.563038  844608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:03:06.563102  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	W1114 15:03:06.563126  844608 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 15:03:06.563195  844608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:03:06.563218  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:03:06.565765  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.566130  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.566233  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.566260  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.566410  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:06.566507  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:06.566539  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:06.566599  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:06.566775  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:03:06.566782  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:06.566974  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:03:06.566971  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:03:06.567135  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:03:06.567308  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:03:06.809371  844608 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 15:03:06.809479  844608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 15:03:06.815258  844608 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 15:03:06.815475  844608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:03:06.815553  844608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:03:06.830037  844608 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 15:03:06.830083  844608 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:03:06.830094  844608 start.go:472] detecting cgroup driver to use...
	I1114 15:03:06.830171  844608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:03:06.844065  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:03:06.857436  844608 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:03:06.857486  844608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:03:06.870318  844608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:03:06.883732  844608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:03:06.897334  844608 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1114 15:03:06.991934  844608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:03:07.116109  844608 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1114 15:03:07.116221  844608 docker.go:219] disabling docker service ...
	I1114 15:03:07.116288  844608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:03:07.131888  844608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:03:07.142848  844608 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1114 15:03:07.143438  844608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:03:07.156635  844608 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1114 15:03:07.262772  844608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:03:07.372037  844608 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1114 15:03:07.372075  844608 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1114 15:03:07.372142  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:03:07.384491  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:03:07.400861  844608 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 15:03:07.401327  844608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:03:07.401396  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:03:07.410685  844608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:03:07.410760  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:03:07.419693  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:03:07.428919  844608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
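
(The four sed invocations above pin the pause image and switch CRI-O to the cgroupfs manager with conmon placed in the pod cgroup. A quick way to confirm the drop-in ended up with the expected keys - a verification sketch, not something minikube itself runs:)

    # Show the keys the sed edits above should have set in the CRI-O drop-in.
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, given the commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
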
	I1114 15:03:07.437972  844608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:03:07.446778  844608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:03:07.454187  844608 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:03:07.454237  844608 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:03:07.454290  844608 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:03:07.465865  844608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
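
(The sysctl probe above fails because the br_netfilter module is not loaded yet, so bridged pod traffic would bypass iptables; minikube responds by loading the module and enabling IPv4 forwarding. The manual equivalent of these two steps, as a sketch:)

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # With the module loaded, the earlier probe now succeeds and reports the bridge netfilter setting:
    sudo sysctl net.bridge.bridge-nf-call-iptables
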
	I1114 15:03:07.474622  844608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:03:07.587935  844608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:03:07.754399  844608 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:03:07.754498  844608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:03:07.759019  844608 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 15:03:07.759047  844608 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 15:03:07.759056  844608 command_runner.go:130] > Device: 16h/22d	Inode: 724         Links: 1
	I1114 15:03:07.759063  844608 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:03:07.759068  844608 command_runner.go:130] > Access: 2023-11-14 15:03:07.715652280 +0000
	I1114 15:03:07.759074  844608 command_runner.go:130] > Modify: 2023-11-14 15:03:07.715652280 +0000
	I1114 15:03:07.759082  844608 command_runner.go:130] > Change: 2023-11-14 15:03:07.715652280 +0000
	I1114 15:03:07.759090  844608 command_runner.go:130] >  Birth: -
	I1114 15:03:07.759228  844608 start.go:540] Will wait 60s for crictl version
	I1114 15:03:07.759357  844608 ssh_runner.go:195] Run: which crictl
	I1114 15:03:07.762986  844608 command_runner.go:130] > /usr/bin/crictl
	I1114 15:03:07.763182  844608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:03:07.802606  844608 command_runner.go:130] > Version:  0.1.0
	I1114 15:03:07.802632  844608 command_runner.go:130] > RuntimeName:  cri-o
	I1114 15:03:07.802665  844608 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1114 15:03:07.802672  844608 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 15:03:07.804325  844608 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
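
(The version probe above is the plain crictl client talking to the CRI-O socket that /etc/crictl.yaml was pointed at a few steps earlier. The same checks by hand, should the socket ever need to be inspected directly - an illustrative sketch:)

    # Explicit endpoint, useful before /etc/crictl.yaml exists:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # Once /etc/crictl.yaml is in place, the endpoint can be omitted:
    sudo crictl version
    sudo crictl info    # fuller runtime status and config, handy when the version call stalls
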
	I1114 15:03:07.804410  844608 ssh_runner.go:195] Run: crio --version
	I1114 15:03:07.854486  844608 command_runner.go:130] > crio version 1.24.1
	I1114 15:03:07.854517  844608 command_runner.go:130] > Version:          1.24.1
	I1114 15:03:07.854537  844608 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:03:07.854545  844608 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:03:07.854554  844608 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:03:07.854561  844608 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:03:07.854567  844608 command_runner.go:130] > Compiler:         gc
	I1114 15:03:07.854575  844608 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:03:07.854583  844608 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:03:07.854594  844608 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:03:07.854601  844608 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:03:07.854606  844608 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:03:07.854698  844608 ssh_runner.go:195] Run: crio --version
	I1114 15:03:07.901542  844608 command_runner.go:130] > crio version 1.24.1
	I1114 15:03:07.901570  844608 command_runner.go:130] > Version:          1.24.1
	I1114 15:03:07.901579  844608 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:03:07.901586  844608 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:03:07.901611  844608 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:03:07.901622  844608 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:03:07.901630  844608 command_runner.go:130] > Compiler:         gc
	I1114 15:03:07.901638  844608 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:03:07.901650  844608 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:03:07.901661  844608 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:03:07.901671  844608 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:03:07.901680  844608 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:03:07.904616  844608 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:03:07.906002  844608 out.go:177]   - env NO_PROXY=192.168.39.63
	I1114 15:03:07.907302  844608 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:03:07.910010  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:07.910468  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:03:07.910491  844608 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:03:07.910711  844608 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:03:07.914590  844608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
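
(The one-liner above rewrites /etc/hosts so the guest resolves host.minikube.internal to 192.168.39.1, presumably the host side of the mk-multinode-627820 libvirt network: it filters out any stale mapping, appends the fresh one, and copies the temp file back into place. Broken out for readability - a sketch of the same steps:)

    # Drop any existing host.minikube.internal line, then append the current mapping.
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.$$
    printf '192.168.39.1\thost.minikube.internal\n' >> /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
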
	I1114 15:03:07.927097  844608 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820 for IP: 192.168.39.38
	I1114 15:03:07.927127  844608 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:03:07.927293  844608 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:03:07.927354  844608 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:03:07.927374  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 15:03:07.927390  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 15:03:07.927408  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 15:03:07.927424  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 15:03:07.927479  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:03:07.927510  844608 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:03:07.927520  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:03:07.927543  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:03:07.927566  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:03:07.927587  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:03:07.927626  844608 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:03:07.927663  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:03:07.927683  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem -> /usr/share/ca-certificates/832211.pem
	I1114 15:03:07.927701  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /usr/share/ca-certificates/8322112.pem
	I1114 15:03:07.928057  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:03:07.950271  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:03:07.971958  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:03:07.995626  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:03:08.018266  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:03:08.040892  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:03:08.064315  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:03:08.087306  844608 ssh_runner.go:195] Run: openssl version
	I1114 15:03:08.092719  844608 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 15:03:08.093107  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:03:08.102822  844608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:03:08.107466  844608 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:03:08.107498  844608 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:03:08.107550  844608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:03:08.112943  844608 command_runner.go:130] > b5213941
	I1114 15:03:08.113145  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:03:08.121926  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:03:08.130747  844608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:03:08.135142  844608 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:03:08.135270  844608 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:03:08.135324  844608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:03:08.140274  844608 command_runner.go:130] > 51391683
	I1114 15:03:08.140602  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:03:08.149373  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:03:08.158123  844608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:03:08.162181  844608 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:03:08.162235  844608 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:03:08.162278  844608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:03:08.167064  844608 command_runner.go:130] > 3ec20f2e
	I1114 15:03:08.167176  844608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
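
(Each certificate installed above is made trusted the same way: place it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0, which is the lookup scheme OpenSSL uses. A condensed sketch of one such install, mirroring the minikubeCA.pem steps logged earlier:)

    # Hash-and-link a CA certificate so OpenSSL-based clients trust it.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
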
	I1114 15:03:08.175734  844608 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:03:08.179784  844608 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:03:08.179817  844608 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:03:08.179886  844608 ssh_runner.go:195] Run: crio config
	I1114 15:03:08.237665  844608 command_runner.go:130] ! time="2023-11-14 15:03:08.212798525Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1114 15:03:08.237716  844608 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1114 15:03:08.248015  844608 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 15:03:08.248042  844608 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 15:03:08.248049  844608 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 15:03:08.248053  844608 command_runner.go:130] > #
	I1114 15:03:08.248059  844608 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 15:03:08.248066  844608 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 15:03:08.248072  844608 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 15:03:08.248082  844608 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 15:03:08.248088  844608 command_runner.go:130] > # reload'.
	I1114 15:03:08.248095  844608 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 15:03:08.248110  844608 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 15:03:08.248123  844608 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 15:03:08.248136  844608 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 15:03:08.248147  844608 command_runner.go:130] > [crio]
	I1114 15:03:08.248165  844608 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 15:03:08.248177  844608 command_runner.go:130] > # containers images, in this directory.
	I1114 15:03:08.248189  844608 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1114 15:03:08.248207  844608 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 15:03:08.248220  844608 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1114 15:03:08.248233  844608 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 15:03:08.248243  844608 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 15:03:08.248251  844608 command_runner.go:130] > storage_driver = "overlay"
	I1114 15:03:08.248257  844608 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 15:03:08.248265  844608 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 15:03:08.248275  844608 command_runner.go:130] > storage_option = [
	I1114 15:03:08.248289  844608 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1114 15:03:08.248300  844608 command_runner.go:130] > ]
	I1114 15:03:08.248315  844608 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 15:03:08.248331  844608 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 15:03:08.248341  844608 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 15:03:08.248351  844608 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 15:03:08.248361  844608 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 15:03:08.248374  844608 command_runner.go:130] > # always happen on a node reboot
	I1114 15:03:08.248387  844608 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 15:03:08.248402  844608 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 15:03:08.248416  844608 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 15:03:08.248432  844608 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 15:03:08.248443  844608 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 15:03:08.248458  844608 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 15:03:08.248472  844608 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 15:03:08.248484  844608 command_runner.go:130] > # internal_wipe = true
	I1114 15:03:08.248498  844608 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 15:03:08.248514  844608 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 15:03:08.248528  844608 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 15:03:08.248539  844608 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 15:03:08.248553  844608 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 15:03:08.248565  844608 command_runner.go:130] > [crio.api]
	I1114 15:03:08.248578  844608 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 15:03:08.248593  844608 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 15:03:08.248607  844608 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 15:03:08.248620  844608 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 15:03:08.248628  844608 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 15:03:08.248642  844608 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 15:03:08.248654  844608 command_runner.go:130] > # stream_port = "0"
	I1114 15:03:08.248668  844608 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 15:03:08.248680  844608 command_runner.go:130] > # stream_enable_tls = false
	I1114 15:03:08.248695  844608 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 15:03:08.248707  844608 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 15:03:08.248718  844608 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 15:03:08.248734  844608 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 15:03:08.248756  844608 command_runner.go:130] > # minutes.
	I1114 15:03:08.248768  844608 command_runner.go:130] > # stream_tls_cert = ""
	I1114 15:03:08.248783  844608 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 15:03:08.248798  844608 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 15:03:08.248809  844608 command_runner.go:130] > # stream_tls_key = ""
	I1114 15:03:08.248822  844608 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 15:03:08.248837  844608 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 15:03:08.248851  844608 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 15:03:08.248863  844608 command_runner.go:130] > # stream_tls_ca = ""
	I1114 15:03:08.248879  844608 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:03:08.248889  844608 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1114 15:03:08.248901  844608 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:03:08.248921  844608 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1114 15:03:08.248947  844608 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 15:03:08.248963  844608 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 15:03:08.248972  844608 command_runner.go:130] > [crio.runtime]
	I1114 15:03:08.248982  844608 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 15:03:08.248996  844608 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 15:03:08.249008  844608 command_runner.go:130] > # "nofile=1024:2048"
	I1114 15:03:08.249023  844608 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 15:03:08.249034  844608 command_runner.go:130] > # default_ulimits = [
	I1114 15:03:08.249041  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249056  844608 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 15:03:08.249065  844608 command_runner.go:130] > # no_pivot = false
	I1114 15:03:08.249074  844608 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 15:03:08.249090  844608 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 15:03:08.249104  844608 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 15:03:08.249119  844608 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 15:03:08.249132  844608 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 15:03:08.249146  844608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:03:08.249156  844608 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1114 15:03:08.249167  844608 command_runner.go:130] > # Cgroup setting for conmon
	I1114 15:03:08.249183  844608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 15:03:08.249194  844608 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 15:03:08.249205  844608 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 15:03:08.249218  844608 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 15:03:08.249233  844608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:03:08.249243  844608 command_runner.go:130] > conmon_env = [
	I1114 15:03:08.249258  844608 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1114 15:03:08.249268  844608 command_runner.go:130] > ]
	I1114 15:03:08.249278  844608 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 15:03:08.249290  844608 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 15:03:08.249304  844608 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 15:03:08.249315  844608 command_runner.go:130] > # default_env = [
	I1114 15:03:08.249325  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249335  844608 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 15:03:08.249346  844608 command_runner.go:130] > # selinux = false
	I1114 15:03:08.249362  844608 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 15:03:08.249373  844608 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 15:03:08.249388  844608 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 15:03:08.249399  844608 command_runner.go:130] > # seccomp_profile = ""
	I1114 15:03:08.249413  844608 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 15:03:08.249427  844608 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 15:03:08.249437  844608 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 15:03:08.249442  844608 command_runner.go:130] > # which might increase security.
	I1114 15:03:08.249449  844608 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1114 15:03:08.249456  844608 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 15:03:08.249463  844608 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 15:03:08.249469  844608 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 15:03:08.249475  844608 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 15:03:08.249483  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:03:08.249491  844608 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 15:03:08.249498  844608 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 15:03:08.249505  844608 command_runner.go:130] > # the cgroup blockio controller.
	I1114 15:03:08.249510  844608 command_runner.go:130] > # blockio_config_file = ""
	I1114 15:03:08.249519  844608 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 15:03:08.249525  844608 command_runner.go:130] > # irqbalance daemon.
	I1114 15:03:08.249531  844608 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 15:03:08.249540  844608 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 15:03:08.249548  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:03:08.249553  844608 command_runner.go:130] > # rdt_config_file = ""
	I1114 15:03:08.249561  844608 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 15:03:08.249565  844608 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 15:03:08.249574  844608 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 15:03:08.249581  844608 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 15:03:08.249588  844608 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 15:03:08.249596  844608 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 15:03:08.249603  844608 command_runner.go:130] > # will be added.
	I1114 15:03:08.249607  844608 command_runner.go:130] > # default_capabilities = [
	I1114 15:03:08.249614  844608 command_runner.go:130] > # 	"CHOWN",
	I1114 15:03:08.249618  844608 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 15:03:08.249625  844608 command_runner.go:130] > # 	"FSETID",
	I1114 15:03:08.249629  844608 command_runner.go:130] > # 	"FOWNER",
	I1114 15:03:08.249635  844608 command_runner.go:130] > # 	"SETGID",
	I1114 15:03:08.249640  844608 command_runner.go:130] > # 	"SETUID",
	I1114 15:03:08.249646  844608 command_runner.go:130] > # 	"SETPCAP",
	I1114 15:03:08.249651  844608 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 15:03:08.249657  844608 command_runner.go:130] > # 	"KILL",
	I1114 15:03:08.249661  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249670  844608 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 15:03:08.249678  844608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:03:08.249685  844608 command_runner.go:130] > # default_sysctls = [
	I1114 15:03:08.249689  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249696  844608 command_runner.go:130] > # List of devices on the host that a
	I1114 15:03:08.249702  844608 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 15:03:08.249709  844608 command_runner.go:130] > # allowed_devices = [
	I1114 15:03:08.249713  844608 command_runner.go:130] > # 	"/dev/fuse",
	I1114 15:03:08.249719  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249726  844608 command_runner.go:130] > # List of additional devices, specified as
	I1114 15:03:08.249736  844608 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 15:03:08.249744  844608 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 15:03:08.249773  844608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:03:08.249785  844608 command_runner.go:130] > # additional_devices = [
	I1114 15:03:08.249789  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249794  844608 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 15:03:08.249798  844608 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 15:03:08.249804  844608 command_runner.go:130] > # 	"/etc/cdi",
	I1114 15:03:08.249809  844608 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 15:03:08.249812  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249818  844608 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 15:03:08.249827  844608 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 15:03:08.249833  844608 command_runner.go:130] > # Defaults to false.
	I1114 15:03:08.249839  844608 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 15:03:08.249847  844608 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 15:03:08.249856  844608 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 15:03:08.249863  844608 command_runner.go:130] > # hooks_dir = [
	I1114 15:03:08.249870  844608 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 15:03:08.249876  844608 command_runner.go:130] > # ]
	I1114 15:03:08.249883  844608 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 15:03:08.249891  844608 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 15:03:08.249899  844608 command_runner.go:130] > # its default mounts from the following two files:
	I1114 15:03:08.249904  844608 command_runner.go:130] > #
	I1114 15:03:08.249910  844608 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 15:03:08.249923  844608 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 15:03:08.249931  844608 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 15:03:08.249937  844608 command_runner.go:130] > #
	I1114 15:03:08.249943  844608 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 15:03:08.249952  844608 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 15:03:08.249961  844608 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 15:03:08.249969  844608 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 15:03:08.249973  844608 command_runner.go:130] > #
	I1114 15:03:08.249980  844608 command_runner.go:130] > # default_mounts_file = ""
	I1114 15:03:08.249986  844608 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 15:03:08.249995  844608 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 15:03:08.250003  844608 command_runner.go:130] > pids_limit = 1024
	I1114 15:03:08.250012  844608 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1114 15:03:08.250021  844608 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 15:03:08.250028  844608 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 15:03:08.250038  844608 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 15:03:08.250044  844608 command_runner.go:130] > # log_size_max = -1
	I1114 15:03:08.250051  844608 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1114 15:03:08.250058  844608 command_runner.go:130] > # log_to_journald = false
	I1114 15:03:08.250064  844608 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 15:03:08.250072  844608 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 15:03:08.250077  844608 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 15:03:08.250083  844608 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 15:03:08.250091  844608 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 15:03:08.250098  844608 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 15:03:08.250104  844608 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 15:03:08.250110  844608 command_runner.go:130] > # read_only = false
	I1114 15:03:08.250116  844608 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 15:03:08.250125  844608 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 15:03:08.250132  844608 command_runner.go:130] > # live configuration reload.
	I1114 15:03:08.250137  844608 command_runner.go:130] > # log_level = "info"
	I1114 15:03:08.250145  844608 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 15:03:08.250152  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:03:08.250157  844608 command_runner.go:130] > # log_filter = ""
	I1114 15:03:08.250163  844608 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 15:03:08.250172  844608 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 15:03:08.250181  844608 command_runner.go:130] > # separated by comma.
	I1114 15:03:08.250192  844608 command_runner.go:130] > # uid_mappings = ""
	I1114 15:03:08.250206  844608 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 15:03:08.250221  844608 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 15:03:08.250233  844608 command_runner.go:130] > # separated by comma.
	I1114 15:03:08.250243  844608 command_runner.go:130] > # gid_mappings = ""
	I1114 15:03:08.250257  844608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 15:03:08.250271  844608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:03:08.250284  844608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:03:08.250293  844608 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 15:03:08.250302  844608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 15:03:08.250312  844608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:03:08.250320  844608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:03:08.250327  844608 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 15:03:08.250333  844608 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 15:03:08.250342  844608 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 15:03:08.250351  844608 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1114 15:03:08.250357  844608 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 15:03:08.250364  844608 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 15:03:08.250372  844608 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 15:03:08.250379  844608 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 15:03:08.250387  844608 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 15:03:08.250394  844608 command_runner.go:130] > drop_infra_ctr = false
	I1114 15:03:08.250404  844608 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 15:03:08.250412  844608 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1114 15:03:08.250479  844608 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 15:03:08.250507  844608 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 15:03:08.250517  844608 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 15:03:08.250522  844608 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 15:03:08.250529  844608 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 15:03:08.250537  844608 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 15:03:08.250543  844608 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1114 15:03:08.250550  844608 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 15:03:08.250558  844608 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 15:03:08.250567  844608 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 15:03:08.250574  844608 command_runner.go:130] > # default_runtime = "runc"
	I1114 15:03:08.250579  844608 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 15:03:08.250589  844608 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1114 15:03:08.250600  844608 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 15:03:08.250608  844608 command_runner.go:130] > # creation as a file is not desired either.
	I1114 15:03:08.250618  844608 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 15:03:08.250625  844608 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 15:03:08.250630  844608 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 15:03:08.250636  844608 command_runner.go:130] > # ]
	I1114 15:03:08.250642  844608 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 15:03:08.250651  844608 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 15:03:08.250659  844608 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 15:03:08.250671  844608 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 15:03:08.250675  844608 command_runner.go:130] > #
	I1114 15:03:08.250680  844608 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 15:03:08.250688  844608 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 15:03:08.250692  844608 command_runner.go:130] > #  runtime_type = "oci"
	I1114 15:03:08.250699  844608 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 15:03:08.250704  844608 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 15:03:08.250710  844608 command_runner.go:130] > #  allowed_annotations = []
	I1114 15:03:08.250714  844608 command_runner.go:130] > # Where:
	I1114 15:03:08.250721  844608 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 15:03:08.250729  844608 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 15:03:08.250738  844608 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 15:03:08.250746  844608 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 15:03:08.250752  844608 command_runner.go:130] > #   in $PATH.
	I1114 15:03:08.250759  844608 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 15:03:08.250766  844608 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 15:03:08.250772  844608 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 15:03:08.250778  844608 command_runner.go:130] > #   state.
	I1114 15:03:08.250785  844608 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 15:03:08.250793  844608 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1114 15:03:08.250801  844608 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 15:03:08.250809  844608 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 15:03:08.250815  844608 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 15:03:08.250823  844608 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 15:03:08.250830  844608 command_runner.go:130] > #   The currently recognized values are:
	I1114 15:03:08.250842  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 15:03:08.250854  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 15:03:08.250862  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 15:03:08.250870  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 15:03:08.250878  844608 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 15:03:08.250887  844608 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 15:03:08.250895  844608 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 15:03:08.250904  844608 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 15:03:08.250910  844608 command_runner.go:130] > #   should be moved to the container's cgroup
	I1114 15:03:08.250916  844608 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 15:03:08.250921  844608 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1114 15:03:08.250928  844608 command_runner.go:130] > runtime_type = "oci"
	I1114 15:03:08.250933  844608 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 15:03:08.250941  844608 command_runner.go:130] > runtime_config_path = ""
	I1114 15:03:08.250947  844608 command_runner.go:130] > monitor_path = ""
	I1114 15:03:08.250952  844608 command_runner.go:130] > monitor_cgroup = ""
	I1114 15:03:08.250959  844608 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 15:03:08.250965  844608 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 15:03:08.250972  844608 command_runner.go:130] > # running containers
	I1114 15:03:08.250976  844608 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1114 15:03:08.250985  844608 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 15:03:08.251017  844608 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 15:03:08.251025  844608 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 15:03:08.251034  844608 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 15:03:08.251042  844608 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 15:03:08.251049  844608 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 15:03:08.251055  844608 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 15:03:08.251062  844608 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 15:03:08.251068  844608 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1114 15:03:08.251080  844608 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 15:03:08.251088  844608 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 15:03:08.251095  844608 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 15:03:08.251104  844608 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1114 15:03:08.251114  844608 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1114 15:03:08.251123  844608 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 15:03:08.251132  844608 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 15:03:08.251142  844608 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 15:03:08.251150  844608 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1114 15:03:08.251159  844608 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 15:03:08.251165  844608 command_runner.go:130] > # Example:
	I1114 15:03:08.251170  844608 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 15:03:08.251183  844608 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 15:03:08.251194  844608 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 15:03:08.251205  844608 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 15:03:08.251214  844608 command_runner.go:130] > # cpuset = 0
	I1114 15:03:08.251221  844608 command_runner.go:130] > # cpushares = "0-1"
	I1114 15:03:08.251231  844608 command_runner.go:130] > # Where:
	I1114 15:03:08.251243  844608 command_runner.go:130] > # The workload name is workload-type.
	I1114 15:03:08.251256  844608 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 15:03:08.251269  844608 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 15:03:08.251281  844608 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 15:03:08.251296  844608 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 15:03:08.251312  844608 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 15:03:08.251318  844608 command_runner.go:130] > # 
	I1114 15:03:08.251328  844608 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 15:03:08.251333  844608 command_runner.go:130] > #
	I1114 15:03:08.251342  844608 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 15:03:08.251351  844608 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 15:03:08.251361  844608 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 15:03:08.251369  844608 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 15:03:08.251376  844608 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 15:03:08.251380  844608 command_runner.go:130] > [crio.image]
	I1114 15:03:08.251386  844608 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 15:03:08.251394  844608 command_runner.go:130] > # default_transport = "docker://"
	I1114 15:03:08.251400  844608 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 15:03:08.251408  844608 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:03:08.251415  844608 command_runner.go:130] > # global_auth_file = ""
	I1114 15:03:08.251420  844608 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 15:03:08.251427  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:03:08.251432  844608 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 15:03:08.251438  844608 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 15:03:08.251444  844608 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:03:08.251456  844608 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:03:08.251463  844608 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 15:03:08.251469  844608 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 15:03:08.251478  844608 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1114 15:03:08.251486  844608 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1114 15:03:08.251494  844608 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 15:03:08.251501  844608 command_runner.go:130] > # pause_command = "/pause"
	I1114 15:03:08.251507  844608 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 15:03:08.251519  844608 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 15:03:08.251527  844608 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 15:03:08.251536  844608 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 15:03:08.251542  844608 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 15:03:08.251548  844608 command_runner.go:130] > # signature_policy = ""
	I1114 15:03:08.251555  844608 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 15:03:08.251563  844608 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 15:03:08.251569  844608 command_runner.go:130] > # changing them here.
	I1114 15:03:08.251574  844608 command_runner.go:130] > # insecure_registries = [
	I1114 15:03:08.251580  844608 command_runner.go:130] > # ]
	I1114 15:03:08.251591  844608 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 15:03:08.251599  844608 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1114 15:03:08.251605  844608 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 15:03:08.251610  844608 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 15:03:08.251617  844608 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 15:03:08.251624  844608 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 15:03:08.251630  844608 command_runner.go:130] > # CNI plugins.
	I1114 15:03:08.251635  844608 command_runner.go:130] > [crio.network]
	I1114 15:03:08.251641  844608 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 15:03:08.251649  844608 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1114 15:03:08.251656  844608 command_runner.go:130] > # cni_default_network = ""
	I1114 15:03:08.251662  844608 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 15:03:08.251669  844608 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 15:03:08.251675  844608 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 15:03:08.251681  844608 command_runner.go:130] > # plugin_dirs = [
	I1114 15:03:08.251686  844608 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 15:03:08.251691  844608 command_runner.go:130] > # ]
	I1114 15:03:08.251697  844608 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 15:03:08.251703  844608 command_runner.go:130] > [crio.metrics]
	I1114 15:03:08.251708  844608 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 15:03:08.251714  844608 command_runner.go:130] > enable_metrics = true
	I1114 15:03:08.251719  844608 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 15:03:08.251726  844608 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 15:03:08.251732  844608 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1114 15:03:08.251741  844608 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 15:03:08.251749  844608 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 15:03:08.251755  844608 command_runner.go:130] > # metrics_collectors = [
	I1114 15:03:08.251759  844608 command_runner.go:130] > # 	"operations",
	I1114 15:03:08.251766  844608 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 15:03:08.251771  844608 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 15:03:08.251777  844608 command_runner.go:130] > # 	"operations_errors",
	I1114 15:03:08.251782  844608 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 15:03:08.251788  844608 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 15:03:08.251792  844608 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 15:03:08.251799  844608 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 15:03:08.251803  844608 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 15:03:08.251810  844608 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 15:03:08.251814  844608 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 15:03:08.251821  844608 command_runner.go:130] > # 	"containers_oom_total",
	I1114 15:03:08.251825  844608 command_runner.go:130] > # 	"containers_oom",
	I1114 15:03:08.251831  844608 command_runner.go:130] > # 	"processes_defunct",
	I1114 15:03:08.251835  844608 command_runner.go:130] > # 	"operations_total",
	I1114 15:03:08.251842  844608 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 15:03:08.251846  844608 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 15:03:08.251853  844608 command_runner.go:130] > # 	"operations_errors_total",
	I1114 15:03:08.251857  844608 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 15:03:08.251864  844608 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 15:03:08.251868  844608 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 15:03:08.251875  844608 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 15:03:08.251880  844608 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 15:03:08.251886  844608 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 15:03:08.251890  844608 command_runner.go:130] > # ]
	I1114 15:03:08.251896  844608 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 15:03:08.251902  844608 command_runner.go:130] > # metrics_port = 9090
	I1114 15:03:08.251908  844608 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 15:03:08.251917  844608 command_runner.go:130] > # metrics_socket = ""
	I1114 15:03:08.251922  844608 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 15:03:08.251930  844608 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 15:03:08.251939  844608 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 15:03:08.251946  844608 command_runner.go:130] > # certificate on any modification event.
	I1114 15:03:08.251950  844608 command_runner.go:130] > # metrics_cert = ""
	I1114 15:03:08.251957  844608 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 15:03:08.251965  844608 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 15:03:08.251969  844608 command_runner.go:130] > # metrics_key = ""
	I1114 15:03:08.251975  844608 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 15:03:08.251982  844608 command_runner.go:130] > [crio.tracing]
	I1114 15:03:08.251988  844608 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 15:03:08.251994  844608 command_runner.go:130] > # enable_tracing = false
	I1114 15:03:08.252000  844608 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1114 15:03:08.252008  844608 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 15:03:08.252013  844608 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 15:03:08.252020  844608 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1114 15:03:08.252026  844608 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 15:03:08.252032  844608 command_runner.go:130] > [crio.stats]
	I1114 15:03:08.252042  844608 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 15:03:08.252051  844608 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 15:03:08.252057  844608 command_runner.go:130] > # stats_collection_period = 0
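
Since the rendered config above enables metrics (enable_metrics = true) and leaves metrics_port at its default of 9090, the endpoint can be spot-checked from a shell on the node (for example via minikube ssh). A minimal sketch; the port, the local bind address and the metric-name prefix are taken from the commented defaults above, not from this test run:

    # Scrape the CRI-O Prometheus endpoint and show a few counters (names vary by CRI-O version).
    curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_
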
	I1114 15:03:08.252117  844608 cni.go:84] Creating CNI manager for ""
	I1114 15:03:08.252125  844608 cni.go:136] 2 nodes found, recommending kindnet
	I1114 15:03:08.252134  844608 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:03:08.252155  844608 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-627820 NodeName:multinode-627820-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:03:08.252300  844608 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-627820-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:03:08.252359  844608 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-627820-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
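
Once the control plane is reachable, the ClusterConfiguration rendered above can also be read back from the cluster itself, which is a convenient cross-check that does not require logging into the node (the context name is this run's profile):

    # Compare the in-cluster kubeadm configuration with the generated config above.
    kubectl --context multinode-627820 -n kube-system get cm kubeadm-config -o yaml
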
	I1114 15:03:08.252464  844608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:03:08.261075  844608 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1114 15:03:08.261505  844608 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1114 15:03:08.261550  844608 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1114 15:03:08.270803  844608 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1114 15:03:08.270831  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1114 15:03:08.270831  844608 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1114 15:03:08.270921  844608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1114 15:03:08.270831  844608 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1114 15:03:08.278085  844608 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1114 15:03:08.278125  844608 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1114 15:03:08.278141  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1114 15:03:08.887074  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1114 15:03:08.887156  844608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1114 15:03:08.892767  844608 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1114 15:03:08.893078  844608 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1114 15:03:08.893106  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1114 15:03:09.249948  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:03:09.264018  844608 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubelet -> /var/lib/minikube/binaries/v1.28.3/kubelet
	I1114 15:03:09.264125  844608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet
	I1114 15:03:09.268329  844608 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1114 15:03:09.268382  844608 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1114 15:03:09.268413  844608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubelet --> /var/lib/minikube/binaries/v1.28.3/kubelet (110780416 bytes)
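
The binaries are fetched through dl.k8s.io with a ?checksum=file:...sha256 query, so the cached copies can be re-verified against the same published checksums. A hedged sketch for kubectl, using the cache path from this run (kubeadm and kubelet follow the same pattern):

    cd /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3
    # Compare the cached binary against the upstream SHA-256 for v1.28.3.
    echo "$(curl -sL https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check
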
	I1114 15:03:09.767774  844608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1114 15:03:09.776462  844608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1114 15:03:09.794070  844608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
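
After the drop-in (10-kubeadm.conf) and unit file are copied, the effective unit and the service state can be confirmed from a shell on the new node (for example via minikube ssh); a quick sanity check:

    sudo systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in copied above
    sudo systemctl is-active kubelet  # reports "active" once the join below starts the service
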
	I1114 15:03:09.812880  844608 ssh_runner.go:195] Run: grep 192.168.39.63	control-plane.minikube.internal$ /etc/hosts
	I1114 15:03:09.817762  844608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
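
The /etc/hosts rewrite above pins control-plane.minikube.internal to the primary node's IP; on the node this can be verified with:

    getent hosts control-plane.minikube.internal   # expected: 192.168.39.63  control-plane.minikube.internal
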
	I1114 15:03:09.829802  844608 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:03:09.830125  844608 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:03:09.830148  844608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:03:09.830195  844608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:03:09.844578  844608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I1114 15:03:09.845016  844608 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:03:09.845524  844608 main.go:141] libmachine: Using API Version  1
	I1114 15:03:09.845549  844608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:03:09.845871  844608 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:03:09.846038  844608 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:03:09.846175  844608 start.go:304] JoinCluster: &{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:03:09.846294  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1114 15:03:09.846313  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:03:09.849486  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:03:09.849914  844608 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:03:09.849951  844608 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:03:09.850098  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:03:09.850313  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:03:09.850478  844608 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:03:09.850607  844608 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:03:10.012937  844608 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a0o1c5.re5m8xy5qwtss7do --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:03:10.013022  844608 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 15:03:10.013062  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a0o1c5.re5m8xy5qwtss7do --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-627820-m02"
	I1114 15:03:10.060648  844608 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 15:03:10.202671  844608 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1114 15:03:10.202707  844608 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1114 15:03:10.237248  844608 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:03:10.237275  844608 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:03:10.237280  844608 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 15:03:10.358574  844608 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1114 15:03:12.370231  844608 command_runner.go:130] > This node has joined the cluster:
	I1114 15:03:12.370261  844608 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1114 15:03:12.370270  844608 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1114 15:03:12.370278  844608 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1114 15:03:12.372032  844608 command_runner.go:130] ! W1114 15:03:10.037984     827 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1114 15:03:12.372066  844608 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:03:12.372093  844608 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a0o1c5.re5m8xy5qwtss7do --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-627820-m02": (2.359013427s)
	I1114 15:03:12.372114  844608 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1114 15:03:12.591915  844608 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1114 15:03:12.591970  844608 start.go:306] JoinCluster complete in 2.745794898s
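
As the join output itself suggests, the new worker can be confirmed from the control plane; a minimal check using the context and node names from this run:

    kubectl --context multinode-627820 get nodes -o wide
    # multinode-627820-m02 should be listed, typically NotReady until the CNI pods come up on it.
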
	I1114 15:03:12.591999  844608 cni.go:84] Creating CNI manager for ""
	I1114 15:03:12.592011  844608 cni.go:136] 2 nodes found, recommending kindnet
	I1114 15:03:12.592081  844608 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 15:03:12.597557  844608 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 15:03:12.597587  844608 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 15:03:12.597608  844608 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 15:03:12.597623  844608 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:03:12.597636  844608 command_runner.go:130] > Access: 2023-11-14 15:01:48.037963128 +0000
	I1114 15:03:12.597649  844608 command_runner.go:130] > Modify: 2023-11-09 04:45:09.000000000 +0000
	I1114 15:03:12.597661  844608 command_runner.go:130] > Change: 2023-11-14 15:01:46.199963128 +0000
	I1114 15:03:12.597671  844608 command_runner.go:130] >  Birth: -
	I1114 15:03:12.597959  844608 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 15:03:12.597979  844608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 15:03:12.616530  844608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 15:03:12.919733  844608 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:03:12.919763  844608 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:03:12.919771  844608 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 15:03:12.919778  844608 command_runner.go:130] > daemonset.apps/kindnet configured
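
The apply above reports the kindnet DaemonSet as configured; whether it has actually rolled a pod onto the new node can be checked with (a sketch, assuming the DaemonSet keeps the name used in the manifest):

    kubectl --context multinode-627820 -n kube-system get ds kindnet
    kubectl --context multinode-627820 -n kube-system get pods -o wide | grep kindnet
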
	I1114 15:03:12.920250  844608 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:03:12.920618  844608 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:03:12.921066  844608 round_trippers.go:463] GET https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:03:12.921095  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:12.921108  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:12.921118  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:12.923081  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:03:12.923102  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:12.923118  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:12 GMT
	I1114 15:03:12.923124  844608 round_trippers.go:580]     Audit-Id: def4278b-8b8a-4531-8d11-2f4e75295fd0
	I1114 15:03:12.923129  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:12.923134  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:12.923139  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:12.923144  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:12.923148  844608 round_trippers.go:580]     Content-Length: 291
	I1114 15:03:12.923169  844608 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"403","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 15:03:12.923253  844608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-627820" context rescaled to 1 replicas
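
The rescale performed here through the scale subresource is equivalent to the following kubectl command, shown only for illustration (minikube does it via the API client as logged above):

    kubectl --context multinode-627820 -n kube-system scale deployment coredns --replicas=1
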
	I1114 15:03:12.923282  844608 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 15:03:12.926272  844608 out.go:177] * Verifying Kubernetes components...
	I1114 15:03:12.927720  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:03:12.942320  844608 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:03:12.942538  844608 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:03:12.942820  844608 node_ready.go:35] waiting up to 6m0s for node "multinode-627820-m02" to be "Ready" ...
	I1114 15:03:12.942899  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:12.942907  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:12.942914  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:12.942920  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:12.945842  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:12.945866  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:12.945876  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:12.945884  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:12 GMT
	I1114 15:03:12.945893  844608 round_trippers.go:580]     Audit-Id: e8a4c153-edf2-4494-9c30-856ee43b51e9
	I1114 15:03:12.945914  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:12.945927  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:12.945943  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:12.945951  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:12.946256  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:12.946604  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:12.946624  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:12.946635  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:12.946645  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:12.948973  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:12.949005  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:12.949014  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:12.949022  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:12.949030  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:12.949037  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:12.949045  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:12 GMT
	I1114 15:03:12.949057  844608 round_trippers.go:580]     Audit-Id: dd4d593d-5775-4a7f-b6eb-3e9953cbb88e
	I1114 15:03:12.949069  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:12.949209  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:13.450482  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:13.450519  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:13.450529  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:13.450536  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:13.453817  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:13.453841  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:13.453848  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:13.453853  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:13.453861  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:13.453870  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:13.453878  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:13.453887  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:13 GMT
	I1114 15:03:13.453896  844608 round_trippers.go:580]     Audit-Id: ea9ca862-8afd-4c46-a644-d61c8039e36e
	I1114 15:03:13.453992  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:13.950617  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:13.950649  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:13.950667  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:13.950674  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:13.953265  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:13.953288  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:13.953295  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:13.953301  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:13.953305  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:13.953310  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:13.953315  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:13.953320  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:13 GMT
	I1114 15:03:13.953325  844608 round_trippers.go:580]     Audit-Id: 32c0a66b-8464-4a8c-9cdc-75db7ece45f7
	I1114 15:03:13.953380  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:14.449988  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:14.450020  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:14.450029  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:14.450036  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:14.453556  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:14.453619  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:14.453660  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:14.453670  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:14 GMT
	I1114 15:03:14.453679  844608 round_trippers.go:580]     Audit-Id: da2ea165-f152-4752-bbc1-dfe85dc4a076
	I1114 15:03:14.453684  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:14.453692  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:14.453699  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:14.453711  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:14.453846  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:14.950628  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:14.950662  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:14.950675  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:14.950685  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:14.953505  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:14.953526  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:14.953533  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:14.953538  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:14.953548  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:14.953553  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:14.953560  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:14.953568  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:14 GMT
	I1114 15:03:14.953578  844608 round_trippers.go:580]     Audit-Id: b14f32f6-0734-4216-bd13-f878e87134e7
	I1114 15:03:14.953670  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:14.953982  844608 node_ready.go:58] node "multinode-627820-m02" has status "Ready":"False"
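
The polling loop above keeps re-reading the Node object until its Ready condition becomes True; the same wait can be expressed manually, with the timeout mirroring the 6m0s minikube uses:

    kubectl --context multinode-627820 wait --for=condition=Ready node/multinode-627820-m02 --timeout=6m
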
	I1114 15:03:15.450602  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:15.450634  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:15.450649  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:15.450659  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:15.453447  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:15.453468  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:15.453475  844608 round_trippers.go:580]     Audit-Id: d5819352-793a-4a05-8df8-e7af99efa4d5
	I1114 15:03:15.453480  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:15.453485  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:15.453491  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:15.453496  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:15.453501  844608 round_trippers.go:580]     Content-Length: 3530
	I1114 15:03:15.453506  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:15 GMT
	I1114 15:03:15.453611  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"451","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1114 15:03:15.949931  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:15.949979  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:15.950010  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:15.950022  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:16.004777  844608 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I1114 15:03:16.004807  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:16.004819  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:16.004827  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:16.004835  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:15 GMT
	I1114 15:03:16.004843  844608 round_trippers.go:580]     Audit-Id: b9105ab3-128b-4c4d-b2b4-2a16d96c5680
	I1114 15:03:16.004850  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:16.004858  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:16.004867  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:16.005068  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:16.450514  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:16.450547  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:16.450559  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:16.450566  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:16.454197  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:16.454239  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:16.454250  844608 round_trippers.go:580]     Audit-Id: 0066deca-9c28-497d-8e89-61a4f1446f76
	I1114 15:03:16.454259  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:16.454267  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:16.454276  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:16.454285  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:16.454298  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:16.454310  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:16 GMT
	I1114 15:03:16.454586  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:16.949968  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:16.949996  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:16.950005  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:16.950011  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:16.952721  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:16.952799  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:16.952816  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:16.952832  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:16.952850  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:16.952862  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:16.952872  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:16 GMT
	I1114 15:03:16.952883  844608 round_trippers.go:580]     Audit-Id: db9c1096-377a-4724-855b-f082ab00360c
	I1114 15:03:16.952892  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:16.952976  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:17.450056  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:17.450085  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:17.450093  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:17.450100  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:17.453485  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:17.453509  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:17.453520  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:17.453530  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:17.453543  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:17.453553  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:17.453563  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:17.453571  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:17 GMT
	I1114 15:03:17.453580  844608 round_trippers.go:580]     Audit-Id: 279b43cb-b3e8-45ca-8b9f-d1e4c9ab7479
	I1114 15:03:17.453675  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:17.454005  844608 node_ready.go:58] node "multinode-627820-m02" has status "Ready":"False"
	I1114 15:03:17.950274  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:17.950308  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:17.950321  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:17.950331  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:17.953434  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:17.953461  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:17.953472  844608 round_trippers.go:580]     Audit-Id: 12395f81-1128-4249-8145-b226cfd087bd
	I1114 15:03:17.953480  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:17.953488  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:17.953494  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:17.953513  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:17.953518  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:17.953524  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:17 GMT
	I1114 15:03:17.953613  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:18.449825  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:18.449861  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:18.449873  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:18.449883  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:18.453132  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:18.453159  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:18.453169  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:18 GMT
	I1114 15:03:18.453178  844608 round_trippers.go:580]     Audit-Id: df2166a3-b443-45d1-b2e6-a6b2c684a0a7
	I1114 15:03:18.453186  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:18.453194  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:18.453203  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:18.453211  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:18.453219  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:18.453396  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:18.950627  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:18.950656  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:18.950665  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:18.950671  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:18.953573  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:18.953601  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:18.953613  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:18.953622  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:18.953638  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:18.953648  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:18.953656  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:18.953667  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:18 GMT
	I1114 15:03:18.953675  844608 round_trippers.go:580]     Audit-Id: fc1a5cb1-4931-4299-a13d-802e2c194590
	I1114 15:03:18.953783  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:19.449807  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:19.449845  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:19.449853  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:19.449860  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:19.452870  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:19.452898  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:19.452909  844608 round_trippers.go:580]     Audit-Id: f3e52695-f613-4dda-9281-39d65e2d9867
	I1114 15:03:19.452918  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:19.452926  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:19.452934  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:19.452952  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:19.452968  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:19.452976  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:19 GMT
	I1114 15:03:19.453061  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:19.950006  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:19.950032  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:19.950040  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:19.950046  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:19.953575  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:19.953597  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:19.953604  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:19.953609  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:19.953614  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:19.953619  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:19 GMT
	I1114 15:03:19.953626  844608 round_trippers.go:580]     Audit-Id: dfff0e21-0399-4e03-b6e2-6ac85a37a5b2
	I1114 15:03:19.953634  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:19.953642  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:19.953697  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:19.953946  844608 node_ready.go:58] node "multinode-627820-m02" has status "Ready":"False"
	I1114 15:03:20.450060  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:20.450085  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:20.450097  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:20.450104  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:20.452767  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:20.452789  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:20.452795  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:20 GMT
	I1114 15:03:20.452801  844608 round_trippers.go:580]     Audit-Id: cdac684e-7cf9-4c64-84d7-d5e379c5ba0f
	I1114 15:03:20.452806  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:20.452813  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:20.452821  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:20.452831  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:20.452843  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:20.452935  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:20.950571  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:20.950602  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:20.950613  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:20.950628  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:20.953926  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:20.953950  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:20.953956  844608 round_trippers.go:580]     Content-Length: 3639
	I1114 15:03:20.953962  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:20 GMT
	I1114 15:03:20.953969  844608 round_trippers.go:580]     Audit-Id: 2ea0260d-3848-4550-b8bf-7a3246762d0a
	I1114 15:03:20.953976  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:20.953987  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:20.953994  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:20.954006  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:20.954139  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"459","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1114 15:03:21.450735  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:21.450764  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.450772  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.450779  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.453425  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:21.453459  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.453470  844608 round_trippers.go:580]     Content-Length: 3725
	I1114 15:03:21.453475  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.453480  844608 round_trippers.go:580]     Audit-Id: f6f8c658-6cfa-4f99-8b22-42e8612bfb67
	I1114 15:03:21.453485  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.453498  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.453512  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.453520  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.453638  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"479","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1114 15:03:21.453914  844608 node_ready.go:49] node "multinode-627820-m02" has status "Ready":"True"
	I1114 15:03:21.453933  844608 node_ready.go:38] duration metric: took 8.511093915s waiting for node "multinode-627820-m02" to be "Ready" ...
	I1114 15:03:21.453944  844608 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:03:21.454037  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:03:21.454057  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.454068  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.454078  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.457758  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:21.457782  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.457821  844608 round_trippers.go:580]     Audit-Id: 25eacc6e-4da0-480b-9d26-7a324ab56d73
	I1114 15:03:21.457840  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.457855  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.457865  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.457880  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.457888  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.459146  844608 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"480"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"399","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67322 chars]
	I1114 15:03:21.461507  844608 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.461590  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:03:21.461601  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.461612  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.461622  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.463615  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:03:21.463639  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.463649  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.463659  844608 round_trippers.go:580]     Audit-Id: eda9e485-54fb-4bcd-ad26-840f59f11bb0
	I1114 15:03:21.463677  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.463691  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.463700  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.463710  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.463883  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"399","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1114 15:03:21.464450  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:21.464469  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.464477  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.464483  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.466639  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:21.466657  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.466666  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.466674  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.466681  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.466690  844608 round_trippers.go:580]     Audit-Id: 49e2ad65-0808-439b-8b8e-b872daea94e0
	I1114 15:03:21.466707  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.466728  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.467000  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:03:21.467373  844608 pod_ready.go:92] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:21.467397  844608 pod_ready.go:81] duration metric: took 5.866454ms waiting for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.467406  844608 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.467491  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-627820
	I1114 15:03:21.467500  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.467507  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.467515  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.469395  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:03:21.469413  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.469422  844608 round_trippers.go:580]     Audit-Id: bd50a523-9df7-4496-bfb7-cd5183f8c4a1
	I1114 15:03:21.469430  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.469438  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.469449  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.469459  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.469471  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.470075  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"333","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1114 15:03:21.470693  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:21.470734  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.470754  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.470776  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.474139  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:21.474172  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.474183  844608 round_trippers.go:580]     Audit-Id: 52c00801-cebe-4c2d-aac8-8147b0b87e74
	I1114 15:03:21.474192  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.474208  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.474217  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.474226  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.474240  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.474862  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:03:21.475176  844608 pod_ready.go:92] pod "etcd-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:21.475193  844608 pod_ready.go:81] duration metric: took 7.757857ms waiting for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.475210  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.475267  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-627820
	I1114 15:03:21.475278  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.475289  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.475302  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.477383  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:21.477406  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.477417  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.477428  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.477437  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.477454  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.477462  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.477473  844608 round_trippers.go:580]     Audit-Id: 0bdd9531-e36c-4453-9280-191c2dd89cec
	I1114 15:03:21.477803  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-627820","namespace":"kube-system","uid":"8a9b9224-3446-46f7-b525-e1f32bb9a33c","resourceVersion":"348","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.63:8443","kubernetes.io/config.hash":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.mirror":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.seen":"2023-11-14T15:02:19.515752674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1114 15:03:21.478151  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:21.478163  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.478170  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.478176  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.481392  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:21.481413  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.481427  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.481437  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.481448  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.481461  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.481470  844608 round_trippers.go:580]     Audit-Id: 50eaf605-82c4-472f-afe0-deed1ea74b64
	I1114 15:03:21.481483  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.481629  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:03:21.481924  844608 pod_ready.go:92] pod "kube-apiserver-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:21.481943  844608 pod_ready.go:81] duration metric: took 6.721132ms waiting for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.481955  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.482009  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-627820
	I1114 15:03:21.482018  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.482029  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.482040  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.483880  844608 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:03:21.483901  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.483911  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.483937  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.483950  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.483962  844608 round_trippers.go:580]     Audit-Id: ed4ac3f7-c089-40de-a04c-dc6bcabae245
	I1114 15:03:21.483974  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.483986  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.484306  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-627820","namespace":"kube-system","uid":"b4440d06-27f9-4455-ae59-2d8c744b99a2","resourceVersion":"268","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.mirror":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.seen":"2023-11-14T15:02:19.515747223Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1114 15:03:21.484786  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:21.484807  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.484818  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.484832  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.488077  844608 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:03:21.488092  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.488114  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.488134  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.488148  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.488157  844608 round_trippers.go:580]     Audit-Id: 3d0a24c9-96e8-468d-8de7-475ff95f63b0
	I1114 15:03:21.488166  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.488173  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.488332  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:03:21.488705  844608 pod_ready.go:92] pod "kube-controller-manager-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:21.488725  844608 pod_ready.go:81] duration metric: took 6.760697ms waiting for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.488752  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.651232  844608 request.go:629] Waited for 162.402957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:03:21.651333  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:03:21.651340  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.651353  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.651368  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.653985  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:21.654011  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.654021  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.654030  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.654038  844608 round_trippers.go:580]     Audit-Id: 1070f2f5-9074-42e4-a73a-97ea6ae4d044
	I1114 15:03:21.654046  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.654054  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.654061  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.654378  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6xg9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"2304a457-3a85-4791-8d18-4e1262db399f","resourceVersion":"467","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1114 15:03:21.851214  844608 request.go:629] Waited for 196.395366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:21.851287  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:03:21.851293  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:21.851304  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:21.851315  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:21.853758  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:21.853783  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:21.853794  844608 round_trippers.go:580]     Audit-Id: eeffe5c2-7328-4ea0-b2b8-a42206e9692e
	I1114 15:03:21.853802  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:21.853811  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:21.853820  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:21.853829  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:21.853840  844608 round_trippers.go:580]     Content-Length: 3725
	I1114 15:03:21.853849  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:21 GMT
	I1114 15:03:21.853950  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"479","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1114 15:03:21.854232  844608 pod_ready.go:92] pod "kube-proxy-6xg9v" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:21.854254  844608 pod_ready.go:81] duration metric: took 365.487238ms waiting for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:21.854269  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:22.051331  844608 request.go:629] Waited for 196.95288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:03:22.051401  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:03:22.051406  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:22.051414  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:22.051423  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:22.054160  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:22.054195  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:22.054205  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:22.054214  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:22 GMT
	I1114 15:03:22.054222  844608 round_trippers.go:580]     Audit-Id: ef16886c-f818-443e-8af6-6bdd8e6b565f
	I1114 15:03:22.054229  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:22.054238  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:22.054247  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:22.054377  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m24mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"73a6d4c8-2f95-4818-bc62-566099466b42","resourceVersion":"372","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5513 chars]
	I1114 15:03:22.251241  844608 request.go:629] Waited for 196.391806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:22.251328  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:22.251334  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:22.251349  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:22.251363  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:22.253496  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:22.253519  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:22.253528  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:22.253536  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:22.253545  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:22.253555  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:22.253567  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:22 GMT
	I1114 15:03:22.253580  844608 round_trippers.go:580]     Audit-Id: 8b9f4d8d-d54b-456d-a176-05c54fd2639e
	I1114 15:03:22.253787  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:03:22.254158  844608 pod_ready.go:92] pod "kube-proxy-m24mc" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:22.254188  844608 pod_ready.go:81] duration metric: took 399.90703ms waiting for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:22.254200  844608 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:22.451666  844608 request.go:629] Waited for 197.368591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:03:22.451745  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:03:22.451753  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:22.451765  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:22.451780  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:22.454376  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:22.454400  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:22.454409  844608 round_trippers.go:580]     Audit-Id: 4a2af381-f7b3-46c5-a46c-fb1f118b28fd
	I1114 15:03:22.454417  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:22.454424  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:22.454436  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:22.454462  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:22.454472  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:22 GMT
	I1114 15:03:22.454610  844608 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-627820","namespace":"kube-system","uid":"ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd","resourceVersion":"281","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.mirror":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.seen":"2023-11-14T15:02:19.515750784Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1114 15:03:22.651388  844608 request.go:629] Waited for 196.379691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:22.651456  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:03:22.651460  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:22.651468  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:22.651476  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:22.653897  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:22.653926  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:22.653936  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:22.653944  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:22.653952  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:22.653972  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:22 GMT
	I1114 15:03:22.653985  844608 round_trippers.go:580]     Audit-Id: c88e2359-0631-46f6-9836-847752936c76
	I1114 15:03:22.653990  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:22.654245  844608 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1114 15:03:22.654626  844608 pod_ready.go:92] pod "kube-scheduler-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:03:22.654645  844608 pod_ready.go:81] duration metric: took 400.43714ms waiting for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:03:22.654655  844608 pod_ready.go:38] duration metric: took 1.200696862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:03:22.654669  844608 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:03:22.654721  844608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:03:22.668149  844608 system_svc.go:56] duration metric: took 13.471327ms WaitForService to wait for kubelet.
	I1114 15:03:22.668177  844608 kubeadm.go:581] duration metric: took 9.7448667s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:03:22.668205  844608 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:03:22.851670  844608 request.go:629] Waited for 183.378489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes
	I1114 15:03:22.851734  844608 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes
	I1114 15:03:22.851739  844608 round_trippers.go:469] Request Headers:
	I1114 15:03:22.851747  844608 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:03:22.851754  844608 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:03:22.854624  844608 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:03:22.854653  844608 round_trippers.go:577] Response Headers:
	I1114 15:03:22.854664  844608 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:03:22 GMT
	I1114 15:03:22.854672  844608 round_trippers.go:580]     Audit-Id: 748bc82e-320a-4b38-adaf-be7628a98170
	I1114 15:03:22.854680  844608 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:03:22.854686  844608 round_trippers.go:580]     Content-Type: application/json
	I1114 15:03:22.854693  844608 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:03:22.854701  844608 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:03:22.855519  844608 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"482"},"items":[{"metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"382","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9644 chars]
	I1114 15:03:22.856089  844608 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:03:22.856133  844608 node_conditions.go:123] node cpu capacity is 2
	I1114 15:03:22.856144  844608 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:03:22.856148  844608 node_conditions.go:123] node cpu capacity is 2
	I1114 15:03:22.856157  844608 node_conditions.go:105] duration metric: took 187.945765ms to run NodePressure ...
	I1114 15:03:22.856169  844608 start.go:228] waiting for startup goroutines ...
	I1114 15:03:22.856197  844608 start.go:242] writing updated cluster config ...
	I1114 15:03:22.856493  844608 ssh_runner.go:195] Run: rm -f paused
	I1114 15:03:22.908147  844608 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:03:22.911382  844608 out.go:177] * Done! kubectl is now configured to use "multinode-627820" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:01:46 UTC, ends at Tue 2023-11-14 15:03:30 UTC. --
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.587639258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974210587623255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ae14bf4e-4834-4d42-8c7b-dd24e7dbd071 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.588424469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7185baf7-7afb-4455-90b8-56c12eb6f96c name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.588470310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7185baf7-7afb-4455-90b8-56c12eb6f96c name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.588639642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:384c0f2c1dc699cb26a41ab8d1f5b434f78479fb00f0fc5f341ad13a35114565,PodSandboxId:da68c0e8c442a1a4f810482fe2517d0f3332a363efbaabcfab17157848ee7c64,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974205729598772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f812b98b794ad95f999077faad5686097f7c7d3fe7de83d9114368b76ca659,PodSandboxId:57b2559720b582ffc716c4037e169748c996658dad1db43d725d26a68d049917,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974158222362587,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d493ccb2674e7f1b2ba6985267a6c6dfcbb845cd3729b3e94022afbd5d233c9,PodSandboxId:881384aa65999d2812ab5cd9113c7ced61f999a03af136b156f164dcf5cee732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974158047849875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d283f6881813211f49ca99e26a8601de2d71e8981c6e9636e0439b852b5ff850,PodSandboxId:7ba1d0d1c3cffa78ffb3e6a5f090e41caaad02c1fe70b867fdb0a517a1612ce4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974155410789221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe61ed38fc44640aa641eadc3529fdbb0a7c9c7295adafa9fba093f724d27134,PodSandboxId:2cda15466b77cb649843fdcb19dab3afe45d120cf096e246bfc5e8fc7448e34d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974153285781969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-566099
466b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d905e8ca7334c0a81a8321b0e52eb1e14b3e1c2d16a086a2652501f1f4775a4,PodSandboxId:3f285aa11c740ee359b19fee27c73be7ad7d5d31387cd7af98292b51995fa3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974132024367704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes
.container.hash: 7a999a98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277,PodSandboxId:6b69f36665fb3618c153ea34faeaaeb71cfff3045f50944345ff8bb2d4e57ebe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974131709142061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 4aacc9a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338ab32e0745ad98e7d965f3df1e9aa735a620ca9c366af950eff3fe3ba6a5bd,PodSandboxId:b42d67be47150a71923c498a31615e80131fca8b1c8ca1bafa8309a9e5c31644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974131527766343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113244ff5079a4bca0925504598cd12ef5bc9c3c24995725c8c9b763fa5f8c3b,PodSandboxId:1bb7431da6db1d1989f4dcb6832c3716e7ac0ea4908ca34d2ed11c46777b913a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974131452398150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7185baf7-7afb-4455-90b8-56c12eb6f96c name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.624241175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1dcb8442-eda6-47cb-8b9e-bffac3ebad5c name=/runtime.v1.RuntimeService/Version
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.624286892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1dcb8442-eda6-47cb-8b9e-bffac3ebad5c name=/runtime.v1.RuntimeService/Version
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.627744162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=928a679a-7078-4676-a33a-5c8d00a91ca0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.628226954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974210628212955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=928a679a-7078-4676-a33a-5c8d00a91ca0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.628684039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4bbc15fa-cafa-4866-91b0-0d9461b02806 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.628725510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4bbc15fa-cafa-4866-91b0-0d9461b02806 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.628896500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:384c0f2c1dc699cb26a41ab8d1f5b434f78479fb00f0fc5f341ad13a35114565,PodSandboxId:da68c0e8c442a1a4f810482fe2517d0f3332a363efbaabcfab17157848ee7c64,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974205729598772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f812b98b794ad95f999077faad5686097f7c7d3fe7de83d9114368b76ca659,PodSandboxId:57b2559720b582ffc716c4037e169748c996658dad1db43d725d26a68d049917,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974158222362587,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d493ccb2674e7f1b2ba6985267a6c6dfcbb845cd3729b3e94022afbd5d233c9,PodSandboxId:881384aa65999d2812ab5cd9113c7ced61f999a03af136b156f164dcf5cee732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974158047849875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d283f6881813211f49ca99e26a8601de2d71e8981c6e9636e0439b852b5ff850,PodSandboxId:7ba1d0d1c3cffa78ffb3e6a5f090e41caaad02c1fe70b867fdb0a517a1612ce4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974155410789221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe61ed38fc44640aa641eadc3529fdbb0a7c9c7295adafa9fba093f724d27134,PodSandboxId:2cda15466b77cb649843fdcb19dab3afe45d120cf096e246bfc5e8fc7448e34d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974153285781969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-566099
466b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d905e8ca7334c0a81a8321b0e52eb1e14b3e1c2d16a086a2652501f1f4775a4,PodSandboxId:3f285aa11c740ee359b19fee27c73be7ad7d5d31387cd7af98292b51995fa3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974132024367704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes
.container.hash: 7a999a98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277,PodSandboxId:6b69f36665fb3618c153ea34faeaaeb71cfff3045f50944345ff8bb2d4e57ebe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974131709142061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 4aacc9a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338ab32e0745ad98e7d965f3df1e9aa735a620ca9c366af950eff3fe3ba6a5bd,PodSandboxId:b42d67be47150a71923c498a31615e80131fca8b1c8ca1bafa8309a9e5c31644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974131527766343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113244ff5079a4bca0925504598cd12ef5bc9c3c24995725c8c9b763fa5f8c3b,PodSandboxId:1bb7431da6db1d1989f4dcb6832c3716e7ac0ea4908ca34d2ed11c46777b913a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974131452398150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4bbc15fa-cafa-4866-91b0-0d9461b02806 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.677850343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=18256117-506d-420f-aac3-e19a4bbfbb51 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.677905566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=18256117-506d-420f-aac3-e19a4bbfbb51 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.678958076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fa9b583e-1f77-43e9-bfb9-b45d5981ba8f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.679360042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974210679348597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fa9b583e-1f77-43e9-bfb9-b45d5981ba8f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.679907229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7696832b-feb9-48d8-b828-fd76ec56ad7a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.679952241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7696832b-feb9-48d8-b828-fd76ec56ad7a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.680533658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:384c0f2c1dc699cb26a41ab8d1f5b434f78479fb00f0fc5f341ad13a35114565,PodSandboxId:da68c0e8c442a1a4f810482fe2517d0f3332a363efbaabcfab17157848ee7c64,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974205729598772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f812b98b794ad95f999077faad5686097f7c7d3fe7de83d9114368b76ca659,PodSandboxId:57b2559720b582ffc716c4037e169748c996658dad1db43d725d26a68d049917,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974158222362587,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d493ccb2674e7f1b2ba6985267a6c6dfcbb845cd3729b3e94022afbd5d233c9,PodSandboxId:881384aa65999d2812ab5cd9113c7ced61f999a03af136b156f164dcf5cee732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974158047849875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d283f6881813211f49ca99e26a8601de2d71e8981c6e9636e0439b852b5ff850,PodSandboxId:7ba1d0d1c3cffa78ffb3e6a5f090e41caaad02c1fe70b867fdb0a517a1612ce4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974155410789221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe61ed38fc44640aa641eadc3529fdbb0a7c9c7295adafa9fba093f724d27134,PodSandboxId:2cda15466b77cb649843fdcb19dab3afe45d120cf096e246bfc5e8fc7448e34d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974153285781969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-566099
466b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d905e8ca7334c0a81a8321b0e52eb1e14b3e1c2d16a086a2652501f1f4775a4,PodSandboxId:3f285aa11c740ee359b19fee27c73be7ad7d5d31387cd7af98292b51995fa3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974132024367704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes
.container.hash: 7a999a98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277,PodSandboxId:6b69f36665fb3618c153ea34faeaaeb71cfff3045f50944345ff8bb2d4e57ebe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974131709142061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 4aacc9a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338ab32e0745ad98e7d965f3df1e9aa735a620ca9c366af950eff3fe3ba6a5bd,PodSandboxId:b42d67be47150a71923c498a31615e80131fca8b1c8ca1bafa8309a9e5c31644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974131527766343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113244ff5079a4bca0925504598cd12ef5bc9c3c24995725c8c9b763fa5f8c3b,PodSandboxId:1bb7431da6db1d1989f4dcb6832c3716e7ac0ea4908ca34d2ed11c46777b913a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974131452398150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7696832b-feb9-48d8-b828-fd76ec56ad7a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.720207974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3dd5c5a3-06cf-4a8c-a614-cc506b79ce85 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.720262817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3dd5c5a3-06cf-4a8c-a614-cc506b79ce85 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.724952480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0c477222-edec-4fd5-a539-92b1e9b2e71a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.725453163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974210725385582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0c477222-edec-4fd5-a539-92b1e9b2e71a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.727102129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a095bd59-ad49-4050-a191-de238a8abec8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.727154239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a095bd59-ad49-4050-a191-de238a8abec8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:03:30 multinode-627820 crio[716]: time="2023-11-14 15:03:30.727334124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:384c0f2c1dc699cb26a41ab8d1f5b434f78479fb00f0fc5f341ad13a35114565,PodSandboxId:da68c0e8c442a1a4f810482fe2517d0f3332a363efbaabcfab17157848ee7c64,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974205729598772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f812b98b794ad95f999077faad5686097f7c7d3fe7de83d9114368b76ca659,PodSandboxId:57b2559720b582ffc716c4037e169748c996658dad1db43d725d26a68d049917,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974158222362587,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d493ccb2674e7f1b2ba6985267a6c6dfcbb845cd3729b3e94022afbd5d233c9,PodSandboxId:881384aa65999d2812ab5cd9113c7ced61f999a03af136b156f164dcf5cee732,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974158047849875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d283f6881813211f49ca99e26a8601de2d71e8981c6e9636e0439b852b5ff850,PodSandboxId:7ba1d0d1c3cffa78ffb3e6a5f090e41caaad02c1fe70b867fdb0a517a1612ce4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974155410789221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe61ed38fc44640aa641eadc3529fdbb0a7c9c7295adafa9fba093f724d27134,PodSandboxId:2cda15466b77cb649843fdcb19dab3afe45d120cf096e246bfc5e8fc7448e34d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974153285781969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-566099
466b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d905e8ca7334c0a81a8321b0e52eb1e14b3e1c2d16a086a2652501f1f4775a4,PodSandboxId:3f285aa11c740ee359b19fee27c73be7ad7d5d31387cd7af98292b51995fa3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974132024367704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes
.container.hash: 7a999a98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277,PodSandboxId:6b69f36665fb3618c153ea34faeaaeb71cfff3045f50944345ff8bb2d4e57ebe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974131709142061,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 4aacc9a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:338ab32e0745ad98e7d965f3df1e9aa735a620ca9c366af950eff3fe3ba6a5bd,PodSandboxId:b42d67be47150a71923c498a31615e80131fca8b1c8ca1bafa8309a9e5c31644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974131527766343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113244ff5079a4bca0925504598cd12ef5bc9c3c24995725c8c9b763fa5f8c3b,PodSandboxId:1bb7431da6db1d1989f4dcb6832c3716e7ac0ea4908ca34d2ed11c46777b913a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974131452398150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a095bd59-ad49-4050-a191-de238a8abec8 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	384c0f2c1dc69       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   5 seconds ago        Running             busybox                   0                   da68c0e8c442a       busybox-5bc68d56bd-nqqlc
	64f812b98b794       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      52 seconds ago       Running             coredns                   0                   57b2559720b58       coredns-5dd5756b68-vh8ng
	5d493ccb2674e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      52 seconds ago       Running             storage-provisioner       0                   881384aa65999       storage-provisioner
	d283f68818132       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      55 seconds ago       Running             kindnet-cni               0                   7ba1d0d1c3cff       kindnet-f8xnr
	fe61ed38fc446       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      57 seconds ago       Running             kube-proxy                0                   2cda15466b77c       kube-proxy-m24mc
	7d905e8ca7334       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   3f285aa11c740       etcd-multinode-627820
	38842b79258e4       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   6b69f36665fb3       kube-apiserver-multinode-627820
	338ab32e0745a       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   b42d67be47150       kube-controller-manager-multinode-627820
	113244ff5079a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   1bb7431da6db1       kube-scheduler-multinode-627820
	
	* 
	* ==> coredns [64f812b98b794ad95f999077faad5686097f7c7d3fe7de83d9114368b76ca659] <==
	* [INFO] 10.244.0.3:42022 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098479s
	[INFO] 10.244.1.2:49780 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171703s
	[INFO] 10.244.1.2:36737 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002070694s
	[INFO] 10.244.1.2:35928 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189689s
	[INFO] 10.244.1.2:55850 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138008s
	[INFO] 10.244.1.2:47474 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001392649s
	[INFO] 10.244.1.2:44865 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000254394s
	[INFO] 10.244.1.2:45987 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122103s
	[INFO] 10.244.1.2:43555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167647s
	[INFO] 10.244.0.3:38784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109412s
	[INFO] 10.244.0.3:54732 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086618s
	[INFO] 10.244.0.3:36040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077626s
	[INFO] 10.244.0.3:36434 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165782s
	[INFO] 10.244.1.2:53604 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144271s
	[INFO] 10.244.1.2:55042 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107932s
	[INFO] 10.244.1.2:57462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173976s
	[INFO] 10.244.1.2:39703 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083574s
	[INFO] 10.244.0.3:37631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108036s
	[INFO] 10.244.0.3:47398 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118021s
	[INFO] 10.244.0.3:40762 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157348s
	[INFO] 10.244.0.3:40788 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010158s
	[INFO] 10.244.1.2:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180142s
	[INFO] 10.244.1.2:33865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000083847s
	[INFO] 10.244.1.2:54418 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098442s
	[INFO] 10.244.1.2:55770 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000201098s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-627820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-627820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=multinode-627820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_02_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:02:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-627820
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 15:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:02:37 +0000   Tue, 14 Nov 2023 15:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:02:37 +0000   Tue, 14 Nov 2023 15:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:02:37 +0000   Tue, 14 Nov 2023 15:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:02:37 +0000   Tue, 14 Nov 2023 15:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    multinode-627820
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1aa3e8488b74a5fbd6d2ddab628f96f
	  System UUID:                b1aa3e84-88b7-4a5f-bd6d-2ddab628f96f
	  Boot ID:                    86c7feaf-cd51-4a08-a735-e4874c48721f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nqqlc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-vh8ng                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-multinode-627820                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	  kube-system                 kindnet-f8xnr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      59s
	  kube-system                 kube-apiserver-multinode-627820             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-multinode-627820    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-m24mc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-627820             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 71s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s   kubelet          Node multinode-627820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s   kubelet          Node multinode-627820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s   kubelet          Node multinode-627820 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           60s   node-controller  Node multinode-627820 event: Registered Node multinode-627820 in Controller
	  Normal  NodeReady                53s   kubelet          Node multinode-627820 status is now: NodeReady
	
	
	Name:               multinode-627820-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-627820-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:03:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-627820-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 15:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:03:21 +0000   Tue, 14 Nov 2023 15:03:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:03:21 +0000   Tue, 14 Nov 2023 15:03:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:03:21 +0000   Tue, 14 Nov 2023 15:03:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:03:21 +0000   Tue, 14 Nov 2023 15:03:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-627820-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 648f3503d7a5414b908e7718376c46b2
	  System UUID:                648f3503-d7a5-414b-908e-7718376c46b2
	  Boot ID:                    10b0a59d-5bfe-447f-92dc-3527bb3c3488
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-rxmbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-2d26z               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19s
	  kube-system                 kube-proxy-6xg9v            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  NodeHasSufficientMemory  19s (x5 over 21s)  kubelet          Node multinode-627820-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x5 over 21s)  kubelet          Node multinode-627820-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x5 over 21s)  kubelet          Node multinode-627820-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                node-controller  Node multinode-627820-m02 event: Registered Node multinode-627820-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-627820-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Nov14 15:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068769] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.337307] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.413579] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149299] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.999174] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.089581] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.110123] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.150598] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.101854] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.205887] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Nov14 15:02] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +9.295317] systemd-fstab-generator[1256]: Ignoring "noauto" for root device
	[ +19.527942] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [7d905e8ca7334c0a81a8321b0e52eb1e14b3e1c2d16a086a2652501f1f4775a4] <==
	* {"level":"info","ts":"2023-11-14T15:02:13.941433Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:02:13.941224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b switched to configuration voters=(3917446624352127867)"}
	{"level":"info","ts":"2023-11-14T15:02:13.941594Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4ca65266b0923ae6","local-member-id":"365d90f3070fcb7b","added-peer-id":"365d90f3070fcb7b","added-peer-peer-urls":["https://192.168.39.63:2380"]}
	{"level":"info","ts":"2023-11-14T15:02:13.94125Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"365d90f3070fcb7b","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-11-14T15:02:14.517127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T15:02:14.517193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T15:02:14.517221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b received MsgPreVoteResp from 365d90f3070fcb7b at term 1"}
	{"level":"info","ts":"2023-11-14T15:02:14.517239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:02:14.51726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b received MsgVoteResp from 365d90f3070fcb7b at term 2"}
	{"level":"info","ts":"2023-11-14T15:02:14.517268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became leader at term 2"}
	{"level":"info","ts":"2023-11-14T15:02:14.51728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 365d90f3070fcb7b elected leader 365d90f3070fcb7b at term 2"}
	{"level":"info","ts":"2023-11-14T15:02:14.519126Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:02:14.519542Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"365d90f3070fcb7b","local-member-attributes":"{Name:multinode-627820 ClientURLs:[https://192.168.39.63:2379]}","request-path":"/0/members/365d90f3070fcb7b/attributes","cluster-id":"4ca65266b0923ae6","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:02:14.519821Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:02:14.520257Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4ca65266b0923ae6","local-member-id":"365d90f3070fcb7b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:02:14.52039Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:02:14.520443Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:02:14.520469Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:02:14.520476Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:02:14.52049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:02:14.521308Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.63:2379"}
	{"level":"info","ts":"2023-11-14T15:02:14.52154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:02:33.060581Z","caller":"traceutil/trace.go:171","msg":"trace[41781009] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"154.349255ms","start":"2023-11-14T15:02:32.90621Z","end":"2023-11-14T15:02:33.060559Z","steps":["trace[41781009] 'process raft request'  (duration: 154.217663ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:02:33.068557Z","caller":"traceutil/trace.go:171","msg":"trace[21092353] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"117.868009ms","start":"2023-11-14T15:02:32.950678Z","end":"2023-11-14T15:02:33.068546Z","steps":["trace[21092353] 'process raft request'  (duration: 117.808272ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:03:15.998062Z","caller":"traceutil/trace.go:171","msg":"trace[2055807355] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"185.855345ms","start":"2023-11-14T15:03:15.812193Z","end":"2023-11-14T15:03:15.998048Z","steps":["trace[2055807355] 'process raft request'  (duration: 176.583924ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  15:03:31 up 1 min,  0 users,  load average: 0.93, 0.36, 0.13
	Linux multinode-627820 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [d283f6881813211f49ca99e26a8601de2d71e8981c6e9636e0439b852b5ff850] <==
	* I1114 15:02:36.249187       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1114 15:02:36.249465       1 main.go:107] hostIP = 192.168.39.63
	podIP = 192.168.39.63
	I1114 15:02:36.249829       1 main.go:116] setting mtu 1500 for CNI 
	I1114 15:02:36.249877       1 main.go:146] kindnetd IP family: "ipv4"
	I1114 15:02:36.249913       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1114 15:02:36.752840       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:02:36.752921       1 main.go:227] handling current node
	I1114 15:02:46.858755       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:02:46.858821       1 main.go:227] handling current node
	I1114 15:02:56.863711       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:02:56.863754       1 main.go:227] handling current node
	I1114 15:03:06.868124       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:03:06.868170       1 main.go:227] handling current node
	I1114 15:03:16.876961       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:03:16.877081       1 main.go:227] handling current node
	I1114 15:03:16.877092       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1114 15:03:16.877099       1 main.go:250] Node multinode-627820-m02 has CIDR [10.244.1.0/24] 
	I1114 15:03:16.877557       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.38 Flags: [] Table: 0} 
	I1114 15:03:26.889891       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:03:26.889909       1 main.go:227] handling current node
	I1114 15:03:26.889919       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1114 15:03:26.889924       1 main.go:250] Node multinode-627820-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277] <==
	* I1114 15:02:15.923360       1 controller.go:624] quota admission added evaluator for: namespaces
	I1114 15:02:15.967663       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 15:02:16.000567       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 15:02:16.000845       1 shared_informer.go:318] Caches are synced for configmaps
	I1114 15:02:16.000918       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 15:02:16.001316       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1114 15:02:16.001545       1 aggregator.go:166] initial CRD sync complete...
	I1114 15:02:16.001575       1 autoregister_controller.go:141] Starting autoregister controller
	I1114 15:02:16.001597       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 15:02:16.001630       1 cache.go:39] Caches are synced for autoregister controller
	I1114 15:02:16.810774       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1114 15:02:16.815855       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1114 15:02:16.815902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 15:02:17.406756       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 15:02:17.453475       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 15:02:17.537633       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1114 15:02:17.546168       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.63]
	I1114 15:02:17.547628       1 controller.go:624] quota admission added evaluator for: endpoints
	I1114 15:02:17.552503       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 15:02:17.928651       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 15:02:19.366345       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 15:02:19.380080       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1114 15:02:19.403091       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 15:02:31.436828       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1114 15:02:31.509327       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [338ab32e0745ad98e7d965f3df1e9aa735a620ca9c366af950eff3fe3ba6a5bd] <==
	* I1114 15:02:32.174391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="300.039µs"
	I1114 15:02:37.173848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="257.94µs"
	I1114 15:02:37.212150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.015µs"
	I1114 15:02:38.715720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.031979ms"
	I1114 15:02:38.716865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="150.78µs"
	I1114 15:02:40.784914       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1114 15:03:12.269385       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-627820-m02\" does not exist"
	I1114 15:03:12.282259       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-627820-m02" podCIDRs=["10.244.1.0/24"]
	I1114 15:03:12.299610       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6xg9v"
	I1114 15:03:12.299653       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2d26z"
	I1114 15:03:15.791568       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-627820-m02"
	I1114 15:03:15.791817       1 event.go:307] "Event occurred" object="multinode-627820-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-627820-m02 event: Registered Node multinode-627820-m02 in Controller"
	I1114 15:03:21.300759       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-627820-m02"
	I1114 15:03:23.590277       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1114 15:03:23.610613       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rxmbm"
	I1114 15:03:23.632124       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-nqqlc"
	I1114 15:03:23.643135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.618568ms"
	I1114 15:03:23.673824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.58107ms"
	I1114 15:03:23.674119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="169.68µs"
	I1114 15:03:23.701074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="116.042µs"
	I1114 15:03:25.805087       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-rxmbm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-rxmbm"
	I1114 15:03:25.854471       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.307386ms"
	I1114 15:03:25.854713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="128.777µs"
	I1114 15:03:26.878830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.988588ms"
	I1114 15:03:26.878953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.405µs"
	
	* 
	* ==> kube-proxy [fe61ed38fc44640aa641eadc3529fdbb0a7c9c7295adafa9fba093f724d27134] <==
	* I1114 15:02:33.468701       1 server_others.go:69] "Using iptables proxy"
	I1114 15:02:33.484301       1 node.go:141] Successfully retrieved node IP: 192.168.39.63
	I1114 15:02:33.541944       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:02:33.542086       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:02:33.552553       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:02:33.555136       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:02:33.555288       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:02:33.555296       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:02:33.556797       1 config.go:188] "Starting service config controller"
	I1114 15:02:33.556861       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:02:33.556893       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:02:33.556909       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:02:33.557546       1 config.go:315] "Starting node config controller"
	I1114 15:02:33.558773       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:02:33.658185       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:02:33.658336       1 shared_informer.go:318] Caches are synced for service config
	I1114 15:02:33.659713       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [113244ff5079a4bca0925504598cd12ef5bc9c3c24995725c8c9b763fa5f8c3b] <==
	* W1114 15:02:15.957567       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 15:02:15.957598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 15:02:15.957671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:02:15.957700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 15:02:15.957774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:02:15.957804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 15:02:15.957872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:02:15.957899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 15:02:15.957955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:02:15.958079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1114 15:02:16.799543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 15:02:16.799604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1114 15:02:16.853334       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:02:16.853413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 15:02:16.860073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:02:16.860142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 15:02:16.925758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:02:16.925912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 15:02:16.979484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:02:16.979581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 15:02:17.042770       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 15:02:17.042856       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 15:02:17.083329       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:02:17.083428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1114 15:02:19.846165       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:01:46 UTC, ends at Tue 2023-11-14 15:03:31 UTC. --
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: I1114 15:02:31.516784    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73a6d4c8-2f95-4818-bc62-566099466b42-xtables-lock\") pod \"kube-proxy-m24mc\" (UID: \"73a6d4c8-2f95-4818-bc62-566099466b42\") " pod="kube-system/kube-proxy-m24mc"
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: I1114 15:02:31.516801    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73a6d4c8-2f95-4818-bc62-566099466b42-lib-modules\") pod \"kube-proxy-m24mc\" (UID: \"73a6d4c8-2f95-4818-bc62-566099466b42\") " pod="kube-system/kube-proxy-m24mc"
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: E1114 15:02:31.628789    1263 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: E1114 15:02:31.628852    1263 projected.go:198] Error preparing data for projected volume kube-api-access-78sqx for pod kube-system/kindnet-f8xnr: configmap "kube-root-ca.crt" not found
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: E1114 15:02:31.629074    1263 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/457f993f-4895-488a-8277-d5187afda5d3-kube-api-access-78sqx podName:457f993f-4895-488a-8277-d5187afda5d3 nodeName:}" failed. No retries permitted until 2023-11-14 15:02:32.12893034 +0000 UTC m=+12.780609762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-78sqx" (UniqueName: "kubernetes.io/projected/457f993f-4895-488a-8277-d5187afda5d3-kube-api-access-78sqx") pod "kindnet-f8xnr" (UID: "457f993f-4895-488a-8277-d5187afda5d3") : configmap "kube-root-ca.crt" not found
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: E1114 15:02:31.630212    1263 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: E1114 15:02:31.630228    1263 projected.go:198] Error preparing data for projected volume kube-api-access-9rnnb for pod kube-system/kube-proxy-m24mc: configmap "kube-root-ca.crt" not found
	Nov 14 15:02:31 multinode-627820 kubelet[1263]: E1114 15:02:31.630263    1263 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73a6d4c8-2f95-4818-bc62-566099466b42-kube-api-access-9rnnb podName:73a6d4c8-2f95-4818-bc62-566099466b42 nodeName:}" failed. No retries permitted until 2023-11-14 15:02:32.130251178 +0000 UTC m=+12.781930612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9rnnb" (UniqueName: "kubernetes.io/projected/73a6d4c8-2f95-4818-bc62-566099466b42-kube-api-access-9rnnb") pod "kube-proxy-m24mc" (UID: "73a6d4c8-2f95-4818-bc62-566099466b42") : configmap "kube-root-ca.crt" not found
	Nov 14 15:02:36 multinode-627820 kubelet[1263]: I1114 15:02:36.674235    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m24mc" podStartSLOduration=5.674173886 podCreationTimestamp="2023-11-14 15:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 15:02:33.674928601 +0000 UTC m=+14.326608044" watchObservedRunningTime="2023-11-14 15:02:36.674173886 +0000 UTC m=+17.325853329"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.132931    1263 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.171361    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-f8xnr" podStartSLOduration=6.171327284 podCreationTimestamp="2023-11-14 15:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 15:02:36.674454 +0000 UTC m=+17.326133443" watchObservedRunningTime="2023-11-14 15:02:37.171327284 +0000 UTC m=+17.823006726"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.171639    1263 topology_manager.go:215] "Topology Admit Handler" podUID="25afe3b4-014e-4180-9597-fb237d622c81" podNamespace="kube-system" podName="coredns-5dd5756b68-vh8ng"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.174799    1263 topology_manager.go:215] "Topology Admit Handler" podUID="f9cf343d-66fc-4de5-b0e0-df38ace21868" podNamespace="kube-system" podName="storage-provisioner"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.254179    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25afe3b4-014e-4180-9597-fb237d622c81-config-volume\") pod \"coredns-5dd5756b68-vh8ng\" (UID: \"25afe3b4-014e-4180-9597-fb237d622c81\") " pod="kube-system/coredns-5dd5756b68-vh8ng"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.254389    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9cf343d-66fc-4de5-b0e0-df38ace21868-tmp\") pod \"storage-provisioner\" (UID: \"f9cf343d-66fc-4de5-b0e0-df38ace21868\") " pod="kube-system/storage-provisioner"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.254508    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wcm\" (UniqueName: \"kubernetes.io/projected/f9cf343d-66fc-4de5-b0e0-df38ace21868-kube-api-access-r6wcm\") pod \"storage-provisioner\" (UID: \"f9cf343d-66fc-4de5-b0e0-df38ace21868\") " pod="kube-system/storage-provisioner"
	Nov 14 15:02:37 multinode-627820 kubelet[1263]: I1114 15:02:37.254541    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9nd\" (UniqueName: \"kubernetes.io/projected/25afe3b4-014e-4180-9597-fb237d622c81-kube-api-access-fs9nd\") pod \"coredns-5dd5756b68-vh8ng\" (UID: \"25afe3b4-014e-4180-9597-fb237d622c81\") " pod="kube-system/coredns-5dd5756b68-vh8ng"
	Nov 14 15:02:38 multinode-627820 kubelet[1263]: I1114 15:02:38.698598    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.698564308 podCreationTimestamp="2023-11-14 15:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 15:02:38.68452415 +0000 UTC m=+19.336203592" watchObservedRunningTime="2023-11-14 15:02:38.698564308 +0000 UTC m=+19.350243784"
	Nov 14 15:03:19 multinode-627820 kubelet[1263]: E1114 15:03:19.606533    1263 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 15:03:19 multinode-627820 kubelet[1263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 15:03:19 multinode-627820 kubelet[1263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 15:03:19 multinode-627820 kubelet[1263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 15:03:23 multinode-627820 kubelet[1263]: I1114 15:03:23.649470    1263 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vh8ng" podStartSLOduration=52.64940319 podCreationTimestamp="2023-11-14 15:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 15:02:38.699123594 +0000 UTC m=+19.350803036" watchObservedRunningTime="2023-11-14 15:03:23.64940319 +0000 UTC m=+64.301082675"
	Nov 14 15:03:23 multinode-627820 kubelet[1263]: I1114 15:03:23.649940    1263 topology_manager.go:215] "Topology Admit Handler" podUID="e733c69a-d862-453f-9b5b-c634e5adc2e8" podNamespace="default" podName="busybox-5bc68d56bd-nqqlc"
	Nov 14 15:03:23 multinode-627820 kubelet[1263]: I1114 15:03:23.721352    1263 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xknvd\" (UniqueName: \"kubernetes.io/projected/e733c69a-d862-453f-9b5b-c634e5adc2e8-kube-api-access-xknvd\") pod \"busybox-5bc68d56bd-nqqlc\" (UID: \"e733c69a-d862-453f-9b5b-c634e5adc2e8\") " pod="default/busybox-5bc68d56bd-nqqlc"
	

                                                
                                                
-- /stdout --
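Note: the kubelet log above ends with the iptables canary failing ("can't initialize ip6tables table `nat': Table does not exist"), which most likely means the guest kernel has no ip6table_nat support; kube-proxy had already logged "No iptables support for family" ipFamily="IPv6" and is running single-stack IPv4. A hypothetical way to confirm this by hand on the node (these commands were not run as part of the test) would be:

	out/minikube-linux-amd64 -p multinode-627820 ssh -- "lsmod | grep ip6table"
	out/minikube-linux-amd64 -p multinode-627820 ssh -- "sudo modprobe ip6table_nat"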
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-627820 -n multinode-627820
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-627820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.24s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (690.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-627820
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-627820
E1114 15:06:27.622998  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:06:34.577560  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-627820: exit status 82 (2m1.348472765s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-627820"  ...
	* Stopping node "multinode-627820"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-627820" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-627820 --wait=true -v=8 --alsologtostderr
E1114 15:07:57.626444  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:08:52.668955  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:11:27.620955  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:11:34.576942  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:12:50.670510  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:13:52.668982  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:15:15.712989  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-627820 --wait=true -v=8 --alsologtostderr: (9m25.746790268s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-627820
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-627820 -n multinode-627820
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-627820 logs -n 25: (1.618092245s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp multinode-627820-m02:/home/docker/cp-test.txt                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2006925696/001/cp-test_multinode-627820-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp multinode-627820-m02:/home/docker/cp-test.txt                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820:/home/docker/cp-test_multinode-627820-m02_multinode-627820.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n multinode-627820 sudo cat                                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | /home/docker/cp-test_multinode-627820-m02_multinode-627820.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp multinode-627820-m02:/home/docker/cp-test.txt                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m03:/home/docker/cp-test_multinode-627820-m02_multinode-627820-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n multinode-627820-m03 sudo cat                                   | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | /home/docker/cp-test_multinode-627820-m02_multinode-627820-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp testdata/cp-test.txt                                                | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp multinode-627820-m03:/home/docker/cp-test.txt                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2006925696/001/cp-test_multinode-627820-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp multinode-627820-m03:/home/docker/cp-test.txt                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820:/home/docker/cp-test_multinode-627820-m03_multinode-627820.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n multinode-627820 sudo cat                                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | /home/docker/cp-test_multinode-627820-m03_multinode-627820.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-627820 cp multinode-627820-m03:/home/docker/cp-test.txt                       | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m02:/home/docker/cp-test_multinode-627820-m03_multinode-627820-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n                                                                 | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | multinode-627820-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-627820 ssh -n multinode-627820-m02 sudo cat                                   | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | /home/docker/cp-test_multinode-627820-m03_multinode-627820-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-627820 node stop m03                                                          | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	| node    | multinode-627820 node start                                                             | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC | 14 Nov 23 15:04 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-627820                                                                | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC |                     |
	| stop    | -p multinode-627820                                                                     | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:04 UTC |                     |
	| start   | -p multinode-627820                                                                     | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:06 UTC | 14 Nov 23 15:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-627820                                                                | multinode-627820 | jenkins | v1.32.0 | 14 Nov 23 15:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:06:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:06:55.541509  847956 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:06:55.541650  847956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:06:55.541660  847956 out.go:309] Setting ErrFile to fd 2...
	I1114 15:06:55.541665  847956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:06:55.541836  847956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:06:55.542401  847956 out.go:303] Setting JSON to false
	I1114 15:06:55.543444  847956 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":42567,"bootTime":1699931848,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:06:55.543503  847956 start.go:138] virtualization: kvm guest
	I1114 15:06:55.546218  847956 out.go:177] * [multinode-627820] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:06:55.547841  847956 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:06:55.547850  847956 notify.go:220] Checking for updates...
	I1114 15:06:55.549396  847956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:06:55.550959  847956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:06:55.552497  847956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:06:55.554003  847956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:06:55.555548  847956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:06:55.557582  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:06:55.557702  847956 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:06:55.558420  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:06:55.558510  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:06:55.573540  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I1114 15:06:55.573949  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:06:55.574574  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:06:55.574603  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:06:55.575032  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:06:55.575251  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:06:55.612159  847956 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:06:55.613573  847956 start.go:298] selected driver: kvm2
	I1114 15:06:55.613592  847956 start.go:902] validating driver "kvm2" against &{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:06:55.613757  847956 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:06:55.614101  847956 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:06:55.614231  847956 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:06:55.628600  847956 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:06:55.629443  847956 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:06:55.629535  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:06:55.629552  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:06:55.629562  847956 start_flags.go:323] config:
	{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:06:55.629870  847956 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:06:55.632509  847956 out.go:177] * Starting control plane node multinode-627820 in cluster multinode-627820
	I1114 15:06:55.633940  847956 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:06:55.633975  847956 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:06:55.633992  847956 cache.go:56] Caching tarball of preloaded images
	I1114 15:06:55.634090  847956 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:06:55.634104  847956 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:06:55.634272  847956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:06:55.634508  847956 start.go:365] acquiring machines lock for multinode-627820: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:06:55.634566  847956 start.go:369] acquired machines lock for "multinode-627820" in 38.424µs
	I1114 15:06:55.634587  847956 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:06:55.634594  847956 fix.go:54] fixHost starting: 
	I1114 15:06:55.634880  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:06:55.634923  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:06:55.647959  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I1114 15:06:55.648387  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:06:55.648912  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:06:55.648938  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:06:55.649330  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:06:55.649550  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:06:55.649730  847956 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:06:55.651189  847956 fix.go:102] recreateIfNeeded on multinode-627820: state=Running err=<nil>
	W1114 15:06:55.651230  847956 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:06:55.653205  847956 out.go:177] * Updating the running kvm2 "multinode-627820" VM ...
	I1114 15:06:55.654577  847956 machine.go:88] provisioning docker machine ...
	I1114 15:06:55.654604  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:06:55.654819  847956 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:06:55.655009  847956 buildroot.go:166] provisioning hostname "multinode-627820"
	I1114 15:06:55.655037  847956 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:06:55.655182  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:06:55.657862  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:06:55.658359  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:06:55.658398  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:06:55.658527  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:06:55.658688  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:06:55.658879  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:06:55.659047  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:06:55.659201  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:06:55.659557  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:06:55.659575  847956 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-627820 && echo "multinode-627820" | sudo tee /etc/hostname
	I1114 15:07:14.156973  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:20.237087  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:23.309145  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:29.389066  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:32.461009  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:38.541083  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:41.613038  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:47.693079  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:50.765028  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:56.845058  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:07:59.917003  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:05.997034  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:09.068982  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:15.149113  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:18.221087  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:24.301011  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:27.373071  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:33.453173  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:36.525027  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:42.605156  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:45.677004  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:51.757106  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:08:54.829097  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:00.913128  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:03.981092  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:10.061104  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:13.133089  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:19.213117  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:22.285031  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:28.365080  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:31.437095  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:37.517052  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:40.589065  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:46.669123  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:49.741004  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:55.821042  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:09:58.892976  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:04.973058  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:08.045092  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:14.125081  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:17.197092  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:23.277073  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:26.349069  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:32.429049  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:35.501084  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:41.581056  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:44.653077  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:50.733031  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:53.805055  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:10:59.885105  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:02.957053  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:09.037153  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:12.109068  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:18.189066  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:21.261022  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:27.341050  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:30.413077  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:36.493089  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:39.565062  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:45.645015  847956 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.63:22: connect: no route to host
	I1114 15:11:48.647871  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:11:48.647907  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:11:48.649911  847956 machine.go:91] provisioned docker machine in 4m52.995309151s
	I1114 15:11:48.649969  847956 fix.go:56] fixHost completed within 4m53.015375503s
	I1114 15:11:48.649980  847956 start.go:83] releasing machines lock for "multinode-627820", held for 4m53.015402184s
	W1114 15:11:48.650006  847956 start.go:691] error starting host: provision: host is not running
	W1114 15:11:48.650169  847956 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1114 15:11:48.650183  847956 start.go:706] Will try again in 5 seconds ...
	I1114 15:11:53.650432  847956 start.go:365] acquiring machines lock for multinode-627820: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:11:53.650591  847956 start.go:369] acquired machines lock for "multinode-627820" in 100.755µs
	I1114 15:11:53.650650  847956 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:11:53.650664  847956 fix.go:54] fixHost starting: 
	I1114 15:11:53.651054  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:11:53.651086  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:11:53.666678  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I1114 15:11:53.667189  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:11:53.667836  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:11:53.667863  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:11:53.668284  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:11:53.668515  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:11:53.668681  847956 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:11:53.670535  847956 fix.go:102] recreateIfNeeded on multinode-627820: state=Stopped err=<nil>
	I1114 15:11:53.670557  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	W1114 15:11:53.670745  847956 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:11:53.674417  847956 out.go:177] * Restarting existing kvm2 VM for "multinode-627820" ...
	I1114 15:11:53.676165  847956 main.go:141] libmachine: (multinode-627820) Calling .Start
	I1114 15:11:53.676367  847956 main.go:141] libmachine: (multinode-627820) Ensuring networks are active...
	I1114 15:11:53.677436  847956 main.go:141] libmachine: (multinode-627820) Ensuring network default is active
	I1114 15:11:53.677863  847956 main.go:141] libmachine: (multinode-627820) Ensuring network mk-multinode-627820 is active
	I1114 15:11:53.678340  847956 main.go:141] libmachine: (multinode-627820) Getting domain xml...
	I1114 15:11:53.679006  847956 main.go:141] libmachine: (multinode-627820) Creating domain...
	I1114 15:11:54.910837  847956 main.go:141] libmachine: (multinode-627820) Waiting to get IP...
	I1114 15:11:54.911998  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:54.912660  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:54.912796  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:54.912640  848757 retry.go:31] will retry after 202.450808ms: waiting for machine to come up
	I1114 15:11:55.117380  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:55.117924  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:55.117997  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:55.117841  848757 retry.go:31] will retry after 308.178989ms: waiting for machine to come up
	I1114 15:11:55.427601  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:55.428021  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:55.428057  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:55.428001  848757 retry.go:31] will retry after 468.191815ms: waiting for machine to come up
	I1114 15:11:55.897805  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:55.898239  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:55.898284  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:55.898184  848757 retry.go:31] will retry after 530.699349ms: waiting for machine to come up
	I1114 15:11:56.430997  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:56.431569  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:56.431619  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:56.431512  848757 retry.go:31] will retry after 703.976462ms: waiting for machine to come up
	I1114 15:11:57.137755  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:57.138162  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:57.138231  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:57.138154  848757 retry.go:31] will retry after 942.329085ms: waiting for machine to come up
	I1114 15:11:58.082377  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:58.082788  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:58.082813  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:58.082736  848757 retry.go:31] will retry after 980.2211ms: waiting for machine to come up
	I1114 15:11:59.064265  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:11:59.064660  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:11:59.064696  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:11:59.064613  848757 retry.go:31] will retry after 1.383572036s: waiting for machine to come up
	I1114 15:12:00.450156  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:00.450684  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:12:00.450717  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:12:00.450627  848757 retry.go:31] will retry after 1.58595187s: waiting for machine to come up
	I1114 15:12:02.037893  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:02.038389  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:12:02.038424  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:12:02.038325  848757 retry.go:31] will retry after 2.166707334s: waiting for machine to come up
	I1114 15:12:04.206957  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:04.207525  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:12:04.207564  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:12:04.207485  848757 retry.go:31] will retry after 2.482033604s: waiting for machine to come up
	I1114 15:12:06.693382  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:06.693943  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:12:06.693975  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:12:06.693898  848757 retry.go:31] will retry after 2.914454151s: waiting for machine to come up
	I1114 15:12:09.610476  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:09.610977  847956 main.go:141] libmachine: (multinode-627820) DBG | unable to find current IP address of domain multinode-627820 in network mk-multinode-627820
	I1114 15:12:09.611007  847956 main.go:141] libmachine: (multinode-627820) DBG | I1114 15:12:09.610905  848757 retry.go:31] will retry after 3.833967273s: waiting for machine to come up
	I1114 15:12:13.448947  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.449459  847956 main.go:141] libmachine: (multinode-627820) Found IP for machine: 192.168.39.63
	I1114 15:12:13.449496  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has current primary IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.449510  847956 main.go:141] libmachine: (multinode-627820) Reserving static IP address...
	I1114 15:12:13.450030  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "multinode-627820", mac: "52:54:00:c4:37:2e", ip: "192.168.39.63"} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.450065  847956 main.go:141] libmachine: (multinode-627820) Reserved static IP address: 192.168.39.63
	I1114 15:12:13.450100  847956 main.go:141] libmachine: (multinode-627820) DBG | skip adding static IP to network mk-multinode-627820 - found existing host DHCP lease matching {name: "multinode-627820", mac: "52:54:00:c4:37:2e", ip: "192.168.39.63"}
	I1114 15:12:13.450116  847956 main.go:141] libmachine: (multinode-627820) Waiting for SSH to be available...
	I1114 15:12:13.450133  847956 main.go:141] libmachine: (multinode-627820) DBG | Getting to WaitForSSH function...
	I1114 15:12:13.452455  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.452854  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.452884  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.453075  847956 main.go:141] libmachine: (multinode-627820) DBG | Using SSH client type: external
	I1114 15:12:13.453103  847956 main.go:141] libmachine: (multinode-627820) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa (-rw-------)
	I1114 15:12:13.453156  847956 main.go:141] libmachine: (multinode-627820) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:12:13.453179  847956 main.go:141] libmachine: (multinode-627820) DBG | About to run SSH command:
	I1114 15:12:13.453195  847956 main.go:141] libmachine: (multinode-627820) DBG | exit 0
	I1114 15:12:13.548561  847956 main.go:141] libmachine: (multinode-627820) DBG | SSH cmd err, output: <nil>: 
	I1114 15:12:13.549049  847956 main.go:141] libmachine: (multinode-627820) Calling .GetConfigRaw
	I1114 15:12:13.549821  847956 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:12:13.552447  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.552867  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.552913  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.553136  847956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:12:13.553338  847956 machine.go:88] provisioning docker machine ...
	I1114 15:12:13.553356  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:12:13.553608  847956 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:12:13.553798  847956 buildroot.go:166] provisioning hostname "multinode-627820"
	I1114 15:12:13.553829  847956 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:12:13.554028  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:13.556120  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.556492  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.556523  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.556612  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:13.556802  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:13.556959  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:13.557073  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:13.557216  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:12:13.557593  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:12:13.557606  847956 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-627820 && echo "multinode-627820" | sudo tee /etc/hostname
	I1114 15:12:13.704153  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-627820
	
	I1114 15:12:13.704208  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:13.707355  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.707772  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.707817  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.707962  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:13.708181  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:13.708401  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:13.708639  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:13.708922  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:12:13.709268  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:12:13.709286  847956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-627820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-627820/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-627820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:12:13.851967  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:12:13.852012  847956 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:12:13.852041  847956 buildroot.go:174] setting up certificates
	I1114 15:12:13.852066  847956 provision.go:83] configureAuth start
	I1114 15:12:13.852089  847956 main.go:141] libmachine: (multinode-627820) Calling .GetMachineName
	I1114 15:12:13.852443  847956 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:12:13.855324  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.855705  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.855739  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.855945  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:13.858637  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.859015  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:13.859061  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:13.859174  847956 provision.go:138] copyHostCerts
	I1114 15:12:13.859225  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:12:13.859269  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:12:13.859303  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:12:13.859387  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:12:13.859520  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:12:13.859545  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:12:13.859552  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:12:13.859581  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:12:13.859628  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:12:13.859644  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:12:13.859650  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:12:13.859670  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:12:13.859715  847956 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.multinode-627820 san=[192.168.39.63 192.168.39.63 localhost 127.0.0.1 minikube multinode-627820]
	I1114 15:12:14.009192  847956 provision.go:172] copyRemoteCerts
	I1114 15:12:14.009274  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:12:14.009303  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:14.012442  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.012896  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.012943  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.013135  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:14.013377  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.013567  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:14.013715  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:12:14.109754  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 15:12:14.109843  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1114 15:12:14.133224  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 15:12:14.133285  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:12:14.155940  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 15:12:14.156004  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:12:14.177782  847956 provision.go:86] duration metric: configureAuth took 325.69379ms
	I1114 15:12:14.177814  847956 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:12:14.178089  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:12:14.178208  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:14.181078  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.181562  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.181593  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.181761  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:14.182004  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.182230  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.182402  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:14.182583  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:12:14.182972  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:12:14.182995  847956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:12:14.503647  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:12:14.503696  847956 machine.go:91] provisioned docker machine in 950.343107ms
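Note on the "%!s(MISSING)" / "%!N(MISSING)" tokens that appear in the provisioning commands above and later in this log: they are not corruption in the captured output. They are what Go's fmt package emits when a format verb has no matching argument at the point the command string is rendered for logging. A minimal sketch reproducing the artifact (the command string here is illustrative, not minikube's actual template):

    package main

    import "fmt"

    func main() {
        // A format verb with no matching argument renders as %!verb(MISSING),
        // so "date +%s.%N" with no arguments prints "date +%!s(MISSING).%!N(MISSING)",
        // matching the form seen in the log lines above.
        fmt.Printf("date +%s.%N\n")
    }

The commands still run correctly on the guest; only the logged copy of the string carries the placeholder.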
	I1114 15:12:14.503709  847956 start.go:300] post-start starting for "multinode-627820" (driver="kvm2")
	I1114 15:12:14.503719  847956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:12:14.503739  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:12:14.504178  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:12:14.504217  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:14.507179  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.507596  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.507629  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.507836  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:14.508045  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.508242  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:14.508388  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:12:14.602965  847956 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:12:14.607343  847956 command_runner.go:130] > NAME=Buildroot
	I1114 15:12:14.607361  847956 command_runner.go:130] > VERSION=2021.02.12-1-g9cb9327-dirty
	I1114 15:12:14.607365  847956 command_runner.go:130] > ID=buildroot
	I1114 15:12:14.607370  847956 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 15:12:14.607377  847956 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 15:12:14.607441  847956 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:12:14.607455  847956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:12:14.607532  847956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:12:14.607634  847956 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:12:14.607646  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /etc/ssl/certs/8322112.pem
	I1114 15:12:14.607763  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:12:14.616825  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:12:14.637856  847956 start.go:303] post-start completed in 134.125626ms
	I1114 15:12:14.637903  847956 fix.go:56] fixHost completed within 20.987230602s
	I1114 15:12:14.637927  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:14.640438  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.640940  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.640973  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.641111  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:14.641329  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.641553  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.641675  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:14.641836  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:12:14.642172  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I1114 15:12:14.642184  847956 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:12:14.773370  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699974734.722559958
	
	I1114 15:12:14.773400  847956 fix.go:206] guest clock: 1699974734.722559958
	I1114 15:12:14.773408  847956 fix.go:219] Guest: 2023-11-14 15:12:14.722559958 +0000 UTC Remote: 2023-11-14 15:12:14.637907808 +0000 UTC m=+319.151768289 (delta=84.65215ms)
	I1114 15:12:14.773499  847956 fix.go:190] guest clock delta is within tolerance: 84.65215ms
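The fix.go lines above read the guest clock over SSH, compare it with the host's view of the same instant, and accept the machine when the delta is small enough; here the 84.65 ms delta passes. A minimal sketch of that comparison, using the two timestamps from the log and an assumed 2-second tolerance (the actual threshold used by minikube may differ):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the log lines above.
        guest := time.Date(2023, 11, 14, 15, 12, 14, 722559958, time.UTC)
        host := time.Date(2023, 11, 14, 15, 12, 14, 637907808, time.UTC)

        // Absolute guest/host clock difference.
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
        // Prints: delta=84.65215ms withinTolerance=true
    }

Only when the delta exceeds the tolerance would the provisioner go on to resync the guest clock.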
	I1114 15:12:14.773525  847956 start.go:83] releasing machines lock for "multinode-627820", held for 21.12291752s
	I1114 15:12:14.773564  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:12:14.773854  847956 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:12:14.776588  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.776989  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.777013  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.777194  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:12:14.777705  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:12:14.777952  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:12:14.778064  847956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:12:14.778119  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:14.778227  847956 ssh_runner.go:195] Run: cat /version.json
	I1114 15:12:14.778266  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:12:14.780835  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.780885  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.781189  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.781219  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.781244  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:14.781260  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:14.781312  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:14.781488  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.781544  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:12:14.781643  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:14.781733  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:12:14.781747  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:12:14.781880  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:12:14.782023  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:12:14.897130  847956 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 15:12:14.897978  847956 command_runner.go:130] > {"iso_version": "v1.32.1-1699485311-17565", "kicbase_version": "v0.0.42", "minikube_version": "v1.32.0", "commit": "ac8620e02dd92b447e2556d107d7751e3faf21d2"}
	I1114 15:12:14.898144  847956 ssh_runner.go:195] Run: systemctl --version
	I1114 15:12:14.903791  847956 command_runner.go:130] > systemd 247 (247)
	I1114 15:12:14.903836  847956 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1114 15:12:14.903894  847956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:12:15.047755  847956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 15:12:15.053494  847956 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 15:12:15.053597  847956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:12:15.053673  847956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:12:15.067088  847956 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 15:12:15.067153  847956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:12:15.067170  847956 start.go:472] detecting cgroup driver to use...
	I1114 15:12:15.067281  847956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:12:15.080624  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:12:15.092504  847956 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:12:15.092569  847956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:12:15.104542  847956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:12:15.116619  847956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:12:15.217735  847956 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1114 15:12:15.217823  847956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:12:15.231326  847956 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1114 15:12:15.335865  847956 docker.go:219] disabling docker service ...
	I1114 15:12:15.335934  847956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:12:15.349407  847956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:12:15.360622  847956 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1114 15:12:15.360692  847956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:12:15.459348  847956 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1114 15:12:15.459469  847956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:12:15.472456  847956 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1114 15:12:15.472844  847956 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1114 15:12:15.558282  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:12:15.570129  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:12:15.587026  847956 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 15:12:15.587124  847956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:12:15.587186  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:12:15.595677  847956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:12:15.595748  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:12:15.604279  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:12:15.613506  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:12:15.622015  847956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:12:15.631817  847956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:12:15.639577  847956 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:12:15.639614  847956 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:12:15.639652  847956 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:12:15.651232  847956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:12:15.659073  847956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:12:15.766802  847956 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:12:15.926137  847956 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:12:15.926252  847956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:12:15.930861  847956 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 15:12:15.930888  847956 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 15:12:15.930902  847956 command_runner.go:130] > Device: 16h/22d	Inode: 777         Links: 1
	I1114 15:12:15.930914  847956 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:12:15.930923  847956 command_runner.go:130] > Access: 2023-11-14 15:12:15.861117816 +0000
	I1114 15:12:15.930936  847956 command_runner.go:130] > Modify: 2023-11-14 15:12:15.861117816 +0000
	I1114 15:12:15.930945  847956 command_runner.go:130] > Change: 2023-11-14 15:12:15.861117816 +0000
	I1114 15:12:15.930951  847956 command_runner.go:130] >  Birth: -
	I1114 15:12:15.930974  847956 start.go:540] Will wait 60s for crictl version
	I1114 15:12:15.931021  847956 ssh_runner.go:195] Run: which crictl
	I1114 15:12:15.936424  847956 command_runner.go:130] > /usr/bin/crictl
	I1114 15:12:15.936495  847956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:12:15.970705  847956 command_runner.go:130] > Version:  0.1.0
	I1114 15:12:15.970725  847956 command_runner.go:130] > RuntimeName:  cri-o
	I1114 15:12:15.970730  847956 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1114 15:12:15.970734  847956 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 15:12:15.971014  847956 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:12:15.971119  847956 ssh_runner.go:195] Run: crio --version
	I1114 15:12:16.014275  847956 command_runner.go:130] > crio version 1.24.1
	I1114 15:12:16.014304  847956 command_runner.go:130] > Version:          1.24.1
	I1114 15:12:16.014311  847956 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:12:16.014316  847956 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:12:16.014321  847956 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:12:16.014326  847956 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:12:16.014330  847956 command_runner.go:130] > Compiler:         gc
	I1114 15:12:16.014334  847956 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:12:16.014339  847956 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:12:16.014346  847956 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:12:16.014362  847956 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:12:16.014370  847956 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:12:16.014459  847956 ssh_runner.go:195] Run: crio --version
	I1114 15:12:16.060372  847956 command_runner.go:130] > crio version 1.24.1
	I1114 15:12:16.060396  847956 command_runner.go:130] > Version:          1.24.1
	I1114 15:12:16.060404  847956 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:12:16.060408  847956 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:12:16.060424  847956 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:12:16.060430  847956 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:12:16.060434  847956 command_runner.go:130] > Compiler:         gc
	I1114 15:12:16.060439  847956 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:12:16.060444  847956 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:12:16.060452  847956 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:12:16.060459  847956 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:12:16.060463  847956 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:12:16.063604  847956 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:12:16.065014  847956 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:12:16.067592  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:16.067970  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:12:16.068006  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:12:16.068171  847956 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:12:16.072152  847956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:12:16.085103  847956 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:12:16.085176  847956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:12:16.124402  847956 command_runner.go:130] > {
	I1114 15:12:16.124425  847956 command_runner.go:130] >   "images": [
	I1114 15:12:16.124429  847956 command_runner.go:130] >     {
	I1114 15:12:16.124437  847956 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1114 15:12:16.124442  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:16.124448  847956 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1114 15:12:16.124452  847956 command_runner.go:130] >       ],
	I1114 15:12:16.124456  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:16.124464  847956 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1114 15:12:16.124474  847956 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1114 15:12:16.124478  847956 command_runner.go:130] >       ],
	I1114 15:12:16.124482  847956 command_runner.go:130] >       "size": "750414",
	I1114 15:12:16.124486  847956 command_runner.go:130] >       "uid": {
	I1114 15:12:16.124490  847956 command_runner.go:130] >         "value": "65535"
	I1114 15:12:16.124494  847956 command_runner.go:130] >       },
	I1114 15:12:16.124498  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:16.124510  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:16.124515  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:16.124522  847956 command_runner.go:130] >     }
	I1114 15:12:16.124526  847956 command_runner.go:130] >   ]
	I1114 15:12:16.124531  847956 command_runner.go:130] > }
	I1114 15:12:16.124681  847956 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:12:16.124732  847956 ssh_runner.go:195] Run: which lz4
	I1114 15:12:16.128374  847956 command_runner.go:130] > /usr/bin/lz4
	I1114 15:12:16.128400  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1114 15:12:16.128474  847956 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:12:16.132330  847956 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:12:16.132379  847956 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:12:16.132400  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:12:17.880623  847956 crio.go:444] Took 1.752176 seconds to copy over tarball
	I1114 15:12:17.880715  847956 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:12:20.973148  847956 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.092401163s)
	I1114 15:12:20.973255  847956 crio.go:451] Took 3.092595 seconds to extract the tarball
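The two figures above (457,879,245 bytes copied in 1.752176 s, then extracted in ~3.09 s) give a quick health check on the host-to-guest transfer path during this restart. A small arithmetic sketch of the implied scp throughput, using only the numbers reported in the log:

    package main

    import "fmt"

    func main() {
        const bytesCopied = 457879245 // preloaded-images-...tar.lz4 size from the log
        const copySeconds = 1.752176  // "Took 1.752176 seconds to copy over tarball"

        mibPerSec := float64(bytesCopied) / copySeconds / (1024 * 1024)
        fmt.Printf("scp throughput ≈ %.1f MiB/s\n", mibPerSec) // ≈ 249.2 MiB/s
    }

A rate in this range suggests the preload copy itself is not a bottleneck in the restart timing being investigated.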
	I1114 15:12:20.973286  847956 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:12:21.012808  847956 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:12:21.060400  847956 command_runner.go:130] > {
	I1114 15:12:21.060422  847956 command_runner.go:130] >   "images": [
	I1114 15:12:21.060426  847956 command_runner.go:130] >     {
	I1114 15:12:21.060438  847956 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1114 15:12:21.060462  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.060473  847956 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1114 15:12:21.060478  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060483  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.060491  847956 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1114 15:12:21.060498  847956 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1114 15:12:21.060505  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060510  847956 command_runner.go:130] >       "size": "65258016",
	I1114 15:12:21.060515  847956 command_runner.go:130] >       "uid": null,
	I1114 15:12:21.060523  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.060540  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.060556  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.060566  847956 command_runner.go:130] >     },
	I1114 15:12:21.060572  847956 command_runner.go:130] >     {
	I1114 15:12:21.060584  847956 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1114 15:12:21.060588  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.060596  847956 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1114 15:12:21.060599  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060609  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.060625  847956 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1114 15:12:21.060642  847956 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1114 15:12:21.060651  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060666  847956 command_runner.go:130] >       "size": "31470524",
	I1114 15:12:21.060675  847956 command_runner.go:130] >       "uid": null,
	I1114 15:12:21.060681  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.060687  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.060695  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.060704  847956 command_runner.go:130] >     },
	I1114 15:12:21.060718  847956 command_runner.go:130] >     {
	I1114 15:12:21.060732  847956 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1114 15:12:21.060757  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.060770  847956 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1114 15:12:21.060776  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060786  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.060800  847956 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1114 15:12:21.060813  847956 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1114 15:12:21.060821  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060828  847956 command_runner.go:130] >       "size": "53621675",
	I1114 15:12:21.060839  847956 command_runner.go:130] >       "uid": null,
	I1114 15:12:21.060849  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.060859  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.060867  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.060881  847956 command_runner.go:130] >     },
	I1114 15:12:21.060890  847956 command_runner.go:130] >     {
	I1114 15:12:21.060900  847956 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1114 15:12:21.060908  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.060924  847956 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1114 15:12:21.060934  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060943  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.060956  847956 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1114 15:12:21.060971  847956 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1114 15:12:21.060987  847956 command_runner.go:130] >       ],
	I1114 15:12:21.060997  847956 command_runner.go:130] >       "size": "295456551",
	I1114 15:12:21.061005  847956 command_runner.go:130] >       "uid": {
	I1114 15:12:21.061012  847956 command_runner.go:130] >         "value": "0"
	I1114 15:12:21.061022  847956 command_runner.go:130] >       },
	I1114 15:12:21.061029  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.061039  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.061046  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.061055  847956 command_runner.go:130] >     },
	I1114 15:12:21.061062  847956 command_runner.go:130] >     {
	I1114 15:12:21.061071  847956 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1114 15:12:21.061076  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.061086  847956 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1114 15:12:21.061100  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061111  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.061123  847956 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1114 15:12:21.061139  847956 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1114 15:12:21.061148  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061154  847956 command_runner.go:130] >       "size": "127165392",
	I1114 15:12:21.061161  847956 command_runner.go:130] >       "uid": {
	I1114 15:12:21.061167  847956 command_runner.go:130] >         "value": "0"
	I1114 15:12:21.061175  847956 command_runner.go:130] >       },
	I1114 15:12:21.061183  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.061197  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.061206  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.061213  847956 command_runner.go:130] >     },
	I1114 15:12:21.061222  847956 command_runner.go:130] >     {
	I1114 15:12:21.061235  847956 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1114 15:12:21.061244  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.061252  847956 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1114 15:12:21.061258  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061272  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.061288  847956 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1114 15:12:21.061304  847956 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1114 15:12:21.061314  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061324  847956 command_runner.go:130] >       "size": "123188534",
	I1114 15:12:21.061333  847956 command_runner.go:130] >       "uid": {
	I1114 15:12:21.061340  847956 command_runner.go:130] >         "value": "0"
	I1114 15:12:21.061346  847956 command_runner.go:130] >       },
	I1114 15:12:21.061357  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.061367  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.061377  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.061386  847956 command_runner.go:130] >     },
	I1114 15:12:21.061392  847956 command_runner.go:130] >     {
	I1114 15:12:21.061405  847956 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1114 15:12:21.061415  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.061423  847956 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1114 15:12:21.061431  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061439  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.061468  847956 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1114 15:12:21.061485  847956 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1114 15:12:21.061492  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061499  847956 command_runner.go:130] >       "size": "74691991",
	I1114 15:12:21.061506  847956 command_runner.go:130] >       "uid": null,
	I1114 15:12:21.061513  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.061521  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.061529  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.061536  847956 command_runner.go:130] >     },
	I1114 15:12:21.061542  847956 command_runner.go:130] >     {
	I1114 15:12:21.061552  847956 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1114 15:12:21.061563  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.061571  847956 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1114 15:12:21.061578  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061585  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.061615  847956 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1114 15:12:21.061625  847956 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1114 15:12:21.061631  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061642  847956 command_runner.go:130] >       "size": "61498678",
	I1114 15:12:21.061649  847956 command_runner.go:130] >       "uid": {
	I1114 15:12:21.061660  847956 command_runner.go:130] >         "value": "0"
	I1114 15:12:21.061666  847956 command_runner.go:130] >       },
	I1114 15:12:21.061676  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.061687  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.061696  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.061702  847956 command_runner.go:130] >     },
	I1114 15:12:21.061706  847956 command_runner.go:130] >     {
	I1114 15:12:21.061720  847956 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1114 15:12:21.061731  847956 command_runner.go:130] >       "repoTags": [
	I1114 15:12:21.061740  847956 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1114 15:12:21.061749  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061760  847956 command_runner.go:130] >       "repoDigests": [
	I1114 15:12:21.061774  847956 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1114 15:12:21.061789  847956 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1114 15:12:21.061798  847956 command_runner.go:130] >       ],
	I1114 15:12:21.061805  847956 command_runner.go:130] >       "size": "750414",
	I1114 15:12:21.061814  847956 command_runner.go:130] >       "uid": {
	I1114 15:12:21.061824  847956 command_runner.go:130] >         "value": "65535"
	I1114 15:12:21.061837  847956 command_runner.go:130] >       },
	I1114 15:12:21.061848  847956 command_runner.go:130] >       "username": "",
	I1114 15:12:21.061858  847956 command_runner.go:130] >       "spec": null,
	I1114 15:12:21.061868  847956 command_runner.go:130] >       "pinned": false
	I1114 15:12:21.061877  847956 command_runner.go:130] >     }
	I1114 15:12:21.061886  847956 command_runner.go:130] >   ]
	I1114 15:12:21.061894  847956 command_runner.go:130] > }
	I1114 15:12:21.062036  847956 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:12:21.062050  847956 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:12:21.062197  847956 ssh_runner.go:195] Run: crio config
	I1114 15:12:21.116466  847956 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 15:12:21.116576  847956 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 15:12:21.116596  847956 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 15:12:21.116619  847956 command_runner.go:130] > #
	I1114 15:12:21.116636  847956 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 15:12:21.116650  847956 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 15:12:21.116661  847956 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 15:12:21.116689  847956 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 15:12:21.116700  847956 command_runner.go:130] > # reload'.
	I1114 15:12:21.116711  847956 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 15:12:21.116725  847956 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 15:12:21.116734  847956 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 15:12:21.116758  847956 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 15:12:21.116766  847956 command_runner.go:130] > [crio]
	I1114 15:12:21.116779  847956 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 15:12:21.116791  847956 command_runner.go:130] > # containers images, in this directory.
	I1114 15:12:21.116841  847956 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1114 15:12:21.116861  847956 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 15:12:21.116874  847956 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1114 15:12:21.116888  847956 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 15:12:21.116902  847956 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 15:12:21.116930  847956 command_runner.go:130] > storage_driver = "overlay"
	I1114 15:12:21.116944  847956 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 15:12:21.116974  847956 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 15:12:21.116984  847956 command_runner.go:130] > storage_option = [
	I1114 15:12:21.117315  847956 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1114 15:12:21.117369  847956 command_runner.go:130] > ]
	I1114 15:12:21.117386  847956 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 15:12:21.117397  847956 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 15:12:21.117819  847956 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 15:12:21.117839  847956 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 15:12:21.117849  847956 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 15:12:21.117856  847956 command_runner.go:130] > # always happen on a node reboot
	I1114 15:12:21.118400  847956 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 15:12:21.118421  847956 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 15:12:21.118430  847956 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 15:12:21.118453  847956 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 15:12:21.118793  847956 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 15:12:21.118820  847956 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 15:12:21.118833  847956 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 15:12:21.119237  847956 command_runner.go:130] > # internal_wipe = true
	I1114 15:12:21.119258  847956 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 15:12:21.119268  847956 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 15:12:21.119284  847956 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 15:12:21.119569  847956 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 15:12:21.119585  847956 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 15:12:21.119592  847956 command_runner.go:130] > [crio.api]
	I1114 15:12:21.119601  847956 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 15:12:21.119961  847956 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 15:12:21.119973  847956 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 15:12:21.120437  847956 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 15:12:21.120451  847956 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 15:12:21.120460  847956 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 15:12:21.120932  847956 command_runner.go:130] > # stream_port = "0"
	I1114 15:12:21.120946  847956 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 15:12:21.121314  847956 command_runner.go:130] > # stream_enable_tls = false
	I1114 15:12:21.121329  847956 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 15:12:21.121334  847956 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 15:12:21.121340  847956 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 15:12:21.121346  847956 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 15:12:21.121350  847956 command_runner.go:130] > # minutes.
	I1114 15:12:21.121355  847956 command_runner.go:130] > # stream_tls_cert = ""
	I1114 15:12:21.121363  847956 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 15:12:21.121370  847956 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 15:12:21.121375  847956 command_runner.go:130] > # stream_tls_key = ""
	I1114 15:12:21.121380  847956 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 15:12:21.121387  847956 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 15:12:21.121393  847956 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 15:12:21.121397  847956 command_runner.go:130] > # stream_tls_ca = ""
	I1114 15:12:21.121411  847956 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:12:21.121420  847956 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1114 15:12:21.121426  847956 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:12:21.121432  847956 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1114 15:12:21.121453  847956 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 15:12:21.121464  847956 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 15:12:21.121468  847956 command_runner.go:130] > [crio.runtime]
	I1114 15:12:21.121473  847956 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 15:12:21.121478  847956 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 15:12:21.121482  847956 command_runner.go:130] > # "nofile=1024:2048"
	I1114 15:12:21.121488  847956 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 15:12:21.121496  847956 command_runner.go:130] > # default_ulimits = [
	I1114 15:12:21.121514  847956 command_runner.go:130] > # ]
	I1114 15:12:21.121523  847956 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 15:12:21.121527  847956 command_runner.go:130] > # no_pivot = false
	I1114 15:12:21.121535  847956 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 15:12:21.121543  847956 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 15:12:21.121558  847956 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 15:12:21.121575  847956 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 15:12:21.121588  847956 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 15:12:21.121599  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:12:21.121606  847956 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1114 15:12:21.121611  847956 command_runner.go:130] > # Cgroup setting for conmon
	I1114 15:12:21.121618  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 15:12:21.121623  847956 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 15:12:21.121629  847956 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 15:12:21.121636  847956 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 15:12:21.121643  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:12:21.121650  847956 command_runner.go:130] > conmon_env = [
	I1114 15:12:21.121678  847956 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1114 15:12:21.121690  847956 command_runner.go:130] > ]
	I1114 15:12:21.121696  847956 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 15:12:21.121702  847956 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 15:12:21.121713  847956 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 15:12:21.121723  847956 command_runner.go:130] > # default_env = [
	I1114 15:12:21.121729  847956 command_runner.go:130] > # ]
	I1114 15:12:21.121750  847956 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 15:12:21.121762  847956 command_runner.go:130] > # selinux = false
	I1114 15:12:21.121769  847956 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 15:12:21.121778  847956 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 15:12:21.121791  847956 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 15:12:21.121803  847956 command_runner.go:130] > # seccomp_profile = ""
	I1114 15:12:21.121812  847956 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 15:12:21.121822  847956 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 15:12:21.121836  847956 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 15:12:21.121846  847956 command_runner.go:130] > # which might increase security.
	I1114 15:12:21.121854  847956 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1114 15:12:21.121867  847956 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 15:12:21.121876  847956 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 15:12:21.121882  847956 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 15:12:21.121895  847956 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 15:12:21.121907  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:12:21.121916  847956 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 15:12:21.121928  847956 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 15:12:21.121942  847956 command_runner.go:130] > # the cgroup blockio controller.
	I1114 15:12:21.121954  847956 command_runner.go:130] > # blockio_config_file = ""
	I1114 15:12:21.121963  847956 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 15:12:21.121979  847956 command_runner.go:130] > # irqbalance daemon.
	I1114 15:12:21.121989  847956 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 15:12:21.122003  847956 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 15:12:21.122015  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:12:21.122025  847956 command_runner.go:130] > # rdt_config_file = ""
	I1114 15:12:21.122034  847956 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 15:12:21.122046  847956 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 15:12:21.122056  847956 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 15:12:21.122091  847956 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 15:12:21.122106  847956 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 15:12:21.122119  847956 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 15:12:21.122128  847956 command_runner.go:130] > # will be added.
	I1114 15:12:21.122135  847956 command_runner.go:130] > # default_capabilities = [
	I1114 15:12:21.122141  847956 command_runner.go:130] > # 	"CHOWN",
	I1114 15:12:21.122151  847956 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 15:12:21.122165  847956 command_runner.go:130] > # 	"FSETID",
	I1114 15:12:21.122175  847956 command_runner.go:130] > # 	"FOWNER",
	I1114 15:12:21.122185  847956 command_runner.go:130] > # 	"SETGID",
	I1114 15:12:21.122192  847956 command_runner.go:130] > # 	"SETUID",
	I1114 15:12:21.122201  847956 command_runner.go:130] > # 	"SETPCAP",
	I1114 15:12:21.122211  847956 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 15:12:21.122218  847956 command_runner.go:130] > # 	"KILL",
	I1114 15:12:21.122222  847956 command_runner.go:130] > # ]
	I1114 15:12:21.122239  847956 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 15:12:21.122259  847956 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:12:21.122269  847956 command_runner.go:130] > # default_sysctls = [
	I1114 15:12:21.122275  847956 command_runner.go:130] > # ]
	I1114 15:12:21.122286  847956 command_runner.go:130] > # List of devices on the host that a
	I1114 15:12:21.122300  847956 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 15:12:21.122310  847956 command_runner.go:130] > # allowed_devices = [
	I1114 15:12:21.122319  847956 command_runner.go:130] > # 	"/dev/fuse",
	I1114 15:12:21.122328  847956 command_runner.go:130] > # ]
	I1114 15:12:21.122337  847956 command_runner.go:130] > # List of additional devices, specified as
	I1114 15:12:21.122357  847956 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 15:12:21.122368  847956 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 15:12:21.122407  847956 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:12:21.122418  847956 command_runner.go:130] > # additional_devices = [
	I1114 15:12:21.122424  847956 command_runner.go:130] > # ]
	I1114 15:12:21.122436  847956 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 15:12:21.122447  847956 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 15:12:21.122456  847956 command_runner.go:130] > # 	"/etc/cdi",
	I1114 15:12:21.122466  847956 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 15:12:21.122474  847956 command_runner.go:130] > # ]
	I1114 15:12:21.122484  847956 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 15:12:21.122491  847956 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 15:12:21.122501  847956 command_runner.go:130] > # Defaults to false.
	I1114 15:12:21.122510  847956 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 15:12:21.122524  847956 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 15:12:21.122535  847956 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 15:12:21.122545  847956 command_runner.go:130] > # hooks_dir = [
	I1114 15:12:21.122553  847956 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 15:12:21.122565  847956 command_runner.go:130] > # ]
	I1114 15:12:21.122579  847956 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 15:12:21.122590  847956 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 15:12:21.122598  847956 command_runner.go:130] > # its default mounts from the following two files:
	I1114 15:12:21.122604  847956 command_runner.go:130] > #
	I1114 15:12:21.122616  847956 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 15:12:21.122632  847956 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 15:12:21.122645  847956 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 15:12:21.122654  847956 command_runner.go:130] > #
	I1114 15:12:21.122665  847956 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 15:12:21.122678  847956 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 15:12:21.122691  847956 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 15:12:21.122700  847956 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 15:12:21.122705  847956 command_runner.go:130] > #
	I1114 15:12:21.122751  847956 command_runner.go:130] > # default_mounts_file = ""
	I1114 15:12:21.122765  847956 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 15:12:21.122776  847956 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 15:12:21.122781  847956 command_runner.go:130] > pids_limit = 1024
	I1114 15:12:21.122799  847956 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1114 15:12:21.122810  847956 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 15:12:21.122822  847956 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 15:12:21.122839  847956 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 15:12:21.122848  847956 command_runner.go:130] > # log_size_max = -1
	I1114 15:12:21.122860  847956 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1114 15:12:21.122869  847956 command_runner.go:130] > # log_to_journald = false
	I1114 15:12:21.122876  847956 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 15:12:21.122887  847956 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 15:12:21.122897  847956 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 15:12:21.122909  847956 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 15:12:21.122921  847956 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 15:12:21.122931  847956 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 15:12:21.122946  847956 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 15:12:21.122975  847956 command_runner.go:130] > # read_only = false
	I1114 15:12:21.122989  847956 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 15:12:21.123000  847956 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 15:12:21.123011  847956 command_runner.go:130] > # live configuration reload.
	I1114 15:12:21.123024  847956 command_runner.go:130] > # log_level = "info"
	I1114 15:12:21.123036  847956 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 15:12:21.123048  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:12:21.123056  847956 command_runner.go:130] > # log_filter = ""
	I1114 15:12:21.123067  847956 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 15:12:21.123081  847956 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 15:12:21.123092  847956 command_runner.go:130] > # separated by comma.
	I1114 15:12:21.123104  847956 command_runner.go:130] > # uid_mappings = ""
	I1114 15:12:21.123118  847956 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 15:12:21.123131  847956 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 15:12:21.123139  847956 command_runner.go:130] > # separated by comma.
	I1114 15:12:21.123144  847956 command_runner.go:130] > # gid_mappings = ""
	I1114 15:12:21.123156  847956 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 15:12:21.123170  847956 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:12:21.123182  847956 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:12:21.123192  847956 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 15:12:21.123205  847956 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 15:12:21.123218  847956 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:12:21.123235  847956 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:12:21.123242  847956 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 15:12:21.123253  847956 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 15:12:21.123267  847956 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 15:12:21.123280  847956 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1114 15:12:21.123290  847956 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 15:12:21.123303  847956 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 15:12:21.123316  847956 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 15:12:21.123327  847956 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 15:12:21.123336  847956 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 15:12:21.123341  847956 command_runner.go:130] > drop_infra_ctr = false
	I1114 15:12:21.123355  847956 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 15:12:21.123367  847956 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1114 15:12:21.123382  847956 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 15:12:21.123391  847956 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 15:12:21.123403  847956 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 15:12:21.123415  847956 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 15:12:21.123426  847956 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 15:12:21.123444  847956 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 15:12:21.123456  847956 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1114 15:12:21.123470  847956 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 15:12:21.123481  847956 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 15:12:21.123495  847956 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 15:12:21.123505  847956 command_runner.go:130] > # default_runtime = "runc"
	I1114 15:12:21.123513  847956 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 15:12:21.123530  847956 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1114 15:12:21.123548  847956 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 15:12:21.123559  847956 command_runner.go:130] > # creation as a file is not desired either.
	I1114 15:12:21.123576  847956 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 15:12:21.123587  847956 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 15:12:21.123618  847956 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 15:12:21.123628  847956 command_runner.go:130] > # ]
	I1114 15:12:21.123640  847956 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 15:12:21.123654  847956 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 15:12:21.123667  847956 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 15:12:21.123680  847956 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 15:12:21.123689  847956 command_runner.go:130] > #
	I1114 15:12:21.123700  847956 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 15:12:21.123712  847956 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 15:12:21.123723  847956 command_runner.go:130] > #  runtime_type = "oci"
	I1114 15:12:21.123735  847956 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 15:12:21.123746  847956 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 15:12:21.123757  847956 command_runner.go:130] > #  allowed_annotations = []
	I1114 15:12:21.123766  847956 command_runner.go:130] > # Where:
	I1114 15:12:21.123775  847956 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 15:12:21.123788  847956 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 15:12:21.123806  847956 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 15:12:21.123876  847956 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 15:12:21.123889  847956 command_runner.go:130] > #   in $PATH.
	I1114 15:12:21.123896  847956 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 15:12:21.123903  847956 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 15:12:21.123909  847956 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 15:12:21.123915  847956 command_runner.go:130] > #   state.
	I1114 15:12:21.123922  847956 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 15:12:21.123936  847956 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1114 15:12:21.123945  847956 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 15:12:21.123950  847956 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 15:12:21.123959  847956 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 15:12:21.123966  847956 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 15:12:21.123973  847956 command_runner.go:130] > #   The currently recognized values are:
	I1114 15:12:21.123979  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 15:12:21.123991  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 15:12:21.124001  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 15:12:21.124008  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 15:12:21.124017  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 15:12:21.124024  847956 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 15:12:21.124032  847956 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 15:12:21.124038  847956 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 15:12:21.124045  847956 command_runner.go:130] > #   should be moved to the container's cgroup
	I1114 15:12:21.124050  847956 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 15:12:21.124056  847956 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1114 15:12:21.124060  847956 command_runner.go:130] > runtime_type = "oci"
	I1114 15:12:21.124070  847956 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 15:12:21.124077  847956 command_runner.go:130] > runtime_config_path = ""
	I1114 15:12:21.124081  847956 command_runner.go:130] > monitor_path = ""
	I1114 15:12:21.124087  847956 command_runner.go:130] > monitor_cgroup = ""
	I1114 15:12:21.124091  847956 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 15:12:21.124100  847956 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 15:12:21.124104  847956 command_runner.go:130] > # running containers
	I1114 15:12:21.124109  847956 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1114 15:12:21.124115  847956 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 15:12:21.124163  847956 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 15:12:21.124171  847956 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 15:12:21.124176  847956 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 15:12:21.124181  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 15:12:21.124188  847956 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 15:12:21.124192  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 15:12:21.124197  847956 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 15:12:21.124204  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
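An editorial aside, not part of the captured log: the runtime handler names declared in this table (runc above, plus the commented-out crun and kata variants) are what a Kubernetes RuntimeClass's handler field points at. A hedged client-go sketch that would register such a RuntimeClass is below; the kubeconfig loading and the kata-qemu name are illustrative assumptions, and nothing in this test creates one.

    // Sketch: how a Kubernetes RuntimeClass maps onto a CRI-O runtime handler such
    // as the commented-out [crio.runtime.runtimes.kata-qemu] table above.
    // Illustrative only; assumes k8s.io/client-go and a local kubeconfig.
    package main

    import (
        "context"
        "log"

        nodev1 "k8s.io/api/node/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        rc := &nodev1.RuntimeClass{
            ObjectMeta: metav1.ObjectMeta{Name: "kata-qemu"},
            Handler:    "kata-qemu", // must match a [crio.runtime.runtimes.<handler>] entry
        }
        if _, err := clientset.NodeV1().RuntimeClasses().Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Println("RuntimeClass kata-qemu created")
    }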
	I1114 15:12:21.124210  847956 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 15:12:21.124219  847956 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 15:12:21.124228  847956 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 15:12:21.124235  847956 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1114 15:12:21.124245  847956 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1114 15:12:21.124251  847956 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 15:12:21.124262  847956 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 15:12:21.124270  847956 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 15:12:21.124279  847956 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1114 15:12:21.124286  847956 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 15:12:21.124291  847956 command_runner.go:130] > # Example:
	I1114 15:12:21.124328  847956 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 15:12:21.124335  847956 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 15:12:21.124339  847956 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 15:12:21.124344  847956 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 15:12:21.124348  847956 command_runner.go:130] > # cpuset = 0
	I1114 15:12:21.124356  847956 command_runner.go:130] > # cpushares = "0-1"
	I1114 15:12:21.124363  847956 command_runner.go:130] > # Where:
	I1114 15:12:21.124377  847956 command_runner.go:130] > # The workload name is workload-type.
	I1114 15:12:21.124394  847956 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 15:12:21.124402  847956 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 15:12:21.124410  847956 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 15:12:21.124424  847956 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 15:12:21.124446  847956 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 15:12:21.124452  847956 command_runner.go:130] > # 
	I1114 15:12:21.124462  847956 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 15:12:21.124471  847956 command_runner.go:130] > #
	I1114 15:12:21.124481  847956 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 15:12:21.124494  847956 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 15:12:21.124506  847956 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 15:12:21.124517  847956 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 15:12:21.124534  847956 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 15:12:21.124543  847956 command_runner.go:130] > [crio.image]
	I1114 15:12:21.124553  847956 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 15:12:21.124562  847956 command_runner.go:130] > # default_transport = "docker://"
	I1114 15:12:21.124571  847956 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 15:12:21.124583  847956 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:12:21.124596  847956 command_runner.go:130] > # global_auth_file = ""
	I1114 15:12:21.124607  847956 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 15:12:21.124617  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:12:21.124628  847956 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 15:12:21.124639  847956 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 15:12:21.124647  847956 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:12:21.124652  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:12:21.124660  847956 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 15:12:21.124666  847956 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 15:12:21.124673  847956 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1114 15:12:21.124681  847956 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1114 15:12:21.124689  847956 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 15:12:21.124693  847956 command_runner.go:130] > # pause_command = "/pause"
	I1114 15:12:21.124702  847956 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 15:12:21.124710  847956 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 15:12:21.124719  847956 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 15:12:21.124725  847956 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 15:12:21.124732  847956 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 15:12:21.124750  847956 command_runner.go:130] > # signature_policy = ""
	I1114 15:12:21.124762  847956 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 15:12:21.124771  847956 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 15:12:21.124778  847956 command_runner.go:130] > # changing them here.
	I1114 15:12:21.124785  847956 command_runner.go:130] > # insecure_registries = [
	I1114 15:12:21.124791  847956 command_runner.go:130] > # ]
	I1114 15:12:21.124800  847956 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 15:12:21.124805  847956 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1114 15:12:21.124809  847956 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 15:12:21.124814  847956 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 15:12:21.124818  847956 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 15:12:21.124824  847956 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 15:12:21.124827  847956 command_runner.go:130] > # CNI plugins.
	I1114 15:12:21.124831  847956 command_runner.go:130] > [crio.network]
	I1114 15:12:21.124836  847956 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 15:12:21.124841  847956 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1114 15:12:21.124845  847956 command_runner.go:130] > # cni_default_network = ""
	I1114 15:12:21.124851  847956 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 15:12:21.124857  847956 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 15:12:21.124863  847956 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 15:12:21.124866  847956 command_runner.go:130] > # plugin_dirs = [
	I1114 15:12:21.124870  847956 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 15:12:21.124873  847956 command_runner.go:130] > # ]
	I1114 15:12:21.124881  847956 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 15:12:21.124885  847956 command_runner.go:130] > [crio.metrics]
	I1114 15:12:21.124889  847956 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 15:12:21.124893  847956 command_runner.go:130] > enable_metrics = true
	I1114 15:12:21.124897  847956 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 15:12:21.124902  847956 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 15:12:21.124911  847956 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1114 15:12:21.124918  847956 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 15:12:21.124924  847956 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 15:12:21.124927  847956 command_runner.go:130] > # metrics_collectors = [
	I1114 15:12:21.124931  847956 command_runner.go:130] > # 	"operations",
	I1114 15:12:21.124936  847956 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 15:12:21.124940  847956 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 15:12:21.124955  847956 command_runner.go:130] > # 	"operations_errors",
	I1114 15:12:21.124970  847956 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 15:12:21.125004  847956 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 15:12:21.125012  847956 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 15:12:21.125016  847956 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 15:12:21.125020  847956 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 15:12:21.125024  847956 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 15:12:21.125028  847956 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 15:12:21.125035  847956 command_runner.go:130] > # 	"containers_oom_total",
	I1114 15:12:21.125039  847956 command_runner.go:130] > # 	"containers_oom",
	I1114 15:12:21.125047  847956 command_runner.go:130] > # 	"processes_defunct",
	I1114 15:12:21.125051  847956 command_runner.go:130] > # 	"operations_total",
	I1114 15:12:21.125057  847956 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 15:12:21.125061  847956 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 15:12:21.125068  847956 command_runner.go:130] > # 	"operations_errors_total",
	I1114 15:12:21.125072  847956 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 15:12:21.125077  847956 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 15:12:21.125082  847956 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 15:12:21.125089  847956 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 15:12:21.125096  847956 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 15:12:21.125100  847956 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 15:12:21.125106  847956 command_runner.go:130] > # ]
	I1114 15:12:21.125111  847956 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 15:12:21.125118  847956 command_runner.go:130] > # metrics_port = 9090
	I1114 15:12:21.125123  847956 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 15:12:21.125129  847956 command_runner.go:130] > # metrics_socket = ""
	I1114 15:12:21.125134  847956 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 15:12:21.125143  847956 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 15:12:21.125151  847956 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 15:12:21.125156  847956 command_runner.go:130] > # certificate on any modification event.
	I1114 15:12:21.125162  847956 command_runner.go:130] > # metrics_cert = ""
	I1114 15:12:21.125168  847956 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 15:12:21.125175  847956 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 15:12:21.125179  847956 command_runner.go:130] > # metrics_key = ""
	I1114 15:12:21.125185  847956 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 15:12:21.125191  847956 command_runner.go:130] > [crio.tracing]
	I1114 15:12:21.125200  847956 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 15:12:21.125207  847956 command_runner.go:130] > # enable_tracing = false
	I1114 15:12:21.125214  847956 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1114 15:12:21.125221  847956 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 15:12:21.125226  847956 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 15:12:21.125233  847956 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1114 15:12:21.125239  847956 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 15:12:21.125245  847956 command_runner.go:130] > [crio.stats]
	I1114 15:12:21.125251  847956 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 15:12:21.125258  847956 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 15:12:21.125262  847956 command_runner.go:130] > # stats_collection_period = 0
	I1114 15:12:21.125511  847956 command_runner.go:130] ! time="2023-11-14 15:12:21.062944225Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1114 15:12:21.125540  847956 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
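An editorial aside, not part of the captured log: the dump above is CRI-O's effective /etc/crio/crio.conf, and the values this run relies on are cgroup_manager = "cgroupfs", pause_image = "registry.k8s.io/pause:3.9" and pids_limit = 1024. A minimal Go sketch of decoding just those keys with github.com/BurntSushi/toml follows; the library choice and struct layout are illustrative assumptions, not how minikube itself reads the file.

    // Sketch: decode a few of the CRI-O settings shown above from /etc/crio/crio.conf.
    // The struct covers only the keys referenced in this log.
    package main

    import (
        "fmt"
        "log"

        "github.com/BurntSushi/toml"
    )

    type crioConf struct {
        Crio struct {
            Runtime struct {
                CgroupManager string `toml:"cgroup_manager"`
                Conmon        string `toml:"conmon"`
                PidsLimit     int64  `toml:"pids_limit"`
            } `toml:"runtime"`
            Image struct {
                PauseImage string `toml:"pause_image"`
            } `toml:"image"`
        } `toml:"crio"`
    }

    func main() {
        var cfg crioConf
        if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
            log.Fatalf("decode crio.conf: %v", err)
        }
        fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // "cgroupfs" in this run
        fmt.Println("pause_image:   ", cfg.Crio.Image.PauseImage)      // "registry.k8s.io/pause:3.9"
        fmt.Println("pids_limit:    ", cfg.Crio.Runtime.PidsLimit)     // 1024
    }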
	I1114 15:12:21.125659  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:12:21.125673  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:12:21.125697  847956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:12:21.125740  847956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.63 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-627820 NodeName:multinode-627820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:12:21.125928  847956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-627820"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
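An editorial aside, not part of the captured log: the kubeadm YAML above is rendered from the kubeadm options struct logged at 15:12:21.125740 (advertise address, node name, pod subnet, and so on). A hedged Go sketch of rendering a fragment of that InitConfiguration with text/template follows; the template text and field names are illustrative and are not minikube's actual template.

    // Sketch: render a fragment of the InitConfiguration above with text/template.
    // Field names are illustrative only; minikube's real template lives in its
    // bootstrapper packages and carries many more options.
    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        data := struct {
            AdvertiseAddress string
            APIServerPort    int
            NodeName         string
            NodeIP           string
        }{"192.168.39.63", 8443, "multinode-627820", "192.168.39.63"}

        tmpl := template.Must(template.New("init").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }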
	
	I1114 15:12:21.126009  847956 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-627820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:12:21.126070  847956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:12:21.134831  847956 command_runner.go:130] > kubeadm
	I1114 15:12:21.134848  847956 command_runner.go:130] > kubectl
	I1114 15:12:21.134855  847956 command_runner.go:130] > kubelet
	I1114 15:12:21.134900  847956 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:12:21.134981  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:12:21.144400  847956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1114 15:12:21.161656  847956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:12:21.178166  847956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1114 15:12:21.195712  847956 ssh_runner.go:195] Run: grep 192.168.39.63	control-plane.minikube.internal$ /etc/hosts
	I1114 15:12:21.199590  847956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
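An editorial aside, not part of the captured log: the one-liner above makes the hosts entry idempotent: it strips any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the temp file back with sudo. A rough Go equivalent is sketched below; the paths and entry format mirror the command, but minikube actually performs this over SSH inside the guest.

    // Sketch: the same idempotent /etc/hosts update as the bash one-liner above --
    // drop any stale "control-plane.minikube.internal" line, append the new
    // mapping, and write the file back.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.39.63\tcontrol-plane.minikube.internal"

        raw, err := os.ReadFile(hostsPath)
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue // filter the stale entry, like grep -v in the log
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }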
	I1114 15:12:21.211441  847956 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820 for IP: 192.168.39.63
	I1114 15:12:21.211506  847956 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:12:21.211675  847956 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:12:21.211724  847956 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:12:21.211854  847956 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key
	I1114 15:12:21.211933  847956 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key.423148a4
	I1114 15:12:21.211985  847956 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key
	I1114 15:12:21.212003  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 15:12:21.212037  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 15:12:21.212054  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 15:12:21.212069  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 15:12:21.212092  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 15:12:21.212111  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 15:12:21.212130  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 15:12:21.212145  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 15:12:21.212242  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:12:21.212281  847956 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:12:21.212301  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:12:21.212336  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:12:21.212373  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:12:21.212408  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:12:21.212462  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:12:21.212505  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /usr/share/ca-certificates/8322112.pem
	I1114 15:12:21.212527  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:12:21.212548  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem -> /usr/share/ca-certificates/832211.pem
	I1114 15:12:21.213787  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:12:21.239302  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:12:21.262661  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:12:21.285195  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:12:21.307480  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:12:21.329474  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:12:21.350907  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:12:21.373829  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:12:21.395746  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:12:21.417617  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:12:21.439749  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:12:21.461278  847956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:12:21.476848  847956 ssh_runner.go:195] Run: openssl version
	I1114 15:12:21.482208  847956 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 15:12:21.482290  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:12:21.492674  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:12:21.497513  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:12:21.497540  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:12:21.497592  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:12:21.502828  847956 command_runner.go:130] > 3ec20f2e
	I1114 15:12:21.503065  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:12:21.513489  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:12:21.523716  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:12:21.528119  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:12:21.528193  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:12:21.528239  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:12:21.533345  847956 command_runner.go:130] > b5213941
	I1114 15:12:21.533621  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:12:21.543697  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:12:21.553565  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:12:21.558088  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:12:21.558122  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:12:21.558161  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:12:21.563506  847956 command_runner.go:130] > 51391683
	I1114 15:12:21.563577  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:12:21.573588  847956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:12:21.577953  847956 command_runner.go:130] > ca.crt
	I1114 15:12:21.577972  847956 command_runner.go:130] > ca.key
	I1114 15:12:21.577980  847956 command_runner.go:130] > healthcheck-client.crt
	I1114 15:12:21.577985  847956 command_runner.go:130] > healthcheck-client.key
	I1114 15:12:21.577989  847956 command_runner.go:130] > peer.crt
	I1114 15:12:21.577993  847956 command_runner.go:130] > peer.key
	I1114 15:12:21.577997  847956 command_runner.go:130] > server.crt
	I1114 15:12:21.578001  847956 command_runner.go:130] > server.key
	I1114 15:12:21.578056  847956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:12:21.583404  847956 command_runner.go:130] > Certificate will not expire
	I1114 15:12:21.583622  847956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:12:21.589048  847956 command_runner.go:130] > Certificate will not expire
	I1114 15:12:21.589117  847956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:12:21.594406  847956 command_runner.go:130] > Certificate will not expire
	I1114 15:12:21.594553  847956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:12:21.599928  847956 command_runner.go:130] > Certificate will not expire
	I1114 15:12:21.600139  847956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:12:21.605540  847956 command_runner.go:130] > Certificate will not expire
	I1114 15:12:21.605862  847956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:12:21.611454  847956 command_runner.go:130] > Certificate will not expire
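The expiry probes above rely on openssl's -checkend flag: with -checkend 86400 the command exits 0 (printing "Certificate will not expire") if the certificate is still valid 24 hours from now, and exits non-zero otherwise. A one-line sketch against one of the paths shown in the log:

  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt \
    && echo "still valid for at least 24h" || echo "expires within 24h"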
	I1114 15:12:21.611557  847956 kubeadm.go:404] StartCluster: {Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:12:21.611717  847956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:12:21.611781  847956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:12:21.649015  847956 cri.go:89] found id: ""
	I1114 15:12:21.649093  847956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:12:21.658980  847956 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1114 15:12:21.659004  847956 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1114 15:12:21.659010  847956 command_runner.go:130] > /var/lib/minikube/etcd:
	I1114 15:12:21.659014  847956 command_runner.go:130] > member
	I1114 15:12:21.659030  847956 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:12:21.659037  847956 kubeadm.go:636] restartCluster start
	I1114 15:12:21.659115  847956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:12:21.668289  847956 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:21.668900  847956 kubeconfig.go:92] found "multinode-627820" server: "https://192.168.39.63:8443"
	I1114 15:12:21.669436  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:12:21.669695  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:12:21.670336  847956 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 15:12:21.670810  847956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:12:21.679761  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:21.679826  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:21.690864  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:21.690883  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:21.690921  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:21.701107  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:22.201868  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:22.201981  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:22.214228  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:22.701754  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:22.701863  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:22.714337  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:23.201942  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:23.202057  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:23.214100  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:23.701193  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:23.701281  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:23.713235  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:24.201966  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:24.202107  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:24.213700  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:24.701185  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:24.701310  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:24.713471  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:25.201998  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:25.202108  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:25.213955  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:25.701484  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:25.701597  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:25.714237  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:26.201873  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:26.201970  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:26.215682  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:26.701966  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:26.702097  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:26.714305  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:27.201955  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:27.202089  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:27.215554  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:27.702149  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:27.702262  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:27.715531  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:28.202197  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:28.202328  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:28.216033  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:28.701164  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:28.701261  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:28.714453  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:29.202169  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:29.202267  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:29.214203  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:29.701803  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:29.701905  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:29.713514  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:30.201885  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:30.201969  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:30.213991  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:30.702254  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:30.702352  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:30.714360  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:31.201906  847956 api_server.go:166] Checking apiserver status ...
	I1114 15:12:31.202018  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:12:31.213744  847956 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:12:31.680477  847956 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
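The retry loop above decides whether the cluster needs a reconfigure by looking for a running apiserver process by name; the same probe can be reproduced on the node (sketch, assuming shell access to the minikube VM):

  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running"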
	I1114 15:12:31.680534  847956 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:12:31.680548  847956 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:12:31.680606  847956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:12:31.722493  847956 cri.go:89] found id: ""
	I1114 15:12:31.722601  847956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:12:31.737933  847956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:12:31.747091  847956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1114 15:12:31.747146  847956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1114 15:12:31.747161  847956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1114 15:12:31.747182  847956 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:12:31.747238  847956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:12:31.747310  847956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:12:31.758872  847956 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:12:31.758900  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:12:31.871542  847956 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:12:31.871572  847956 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1114 15:12:31.871582  847956 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1114 15:12:31.871614  847956 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:12:31.872463  847956 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1114 15:12:31.872994  847956 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:12:31.873870  847956 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1114 15:12:31.874431  847956 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1114 15:12:31.874878  847956 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:12:31.875379  847956 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:12:31.875914  847956 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:12:31.876886  847956 command_runner.go:130] > [certs] Using the existing "sa" key
	I1114 15:12:31.878392  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:12:32.719332  847956 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:12:32.719368  847956 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:12:32.719379  847956 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:12:32.719389  847956 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:12:32.719400  847956 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:12:32.719450  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:12:32.788731  847956 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:12:32.790128  847956 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:12:32.790148  847956 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 15:12:32.920031  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:12:32.992171  847956 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:12:32.992205  847956 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:12:32.992216  847956 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:12:32.992226  847956 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:12:32.992254  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:12:33.051247  847956 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
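Rather than running a full kubeadm init, the restart path replays individual init phases against the existing configuration. A condensed sketch of the sequence shown above (the versioned PATH prefix used in the log is omitted for brevity):

  sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
  sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
  sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
  sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
  sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml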
	I1114 15:12:33.054774  847956 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:12:33.054845  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:33.066586  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:33.579269  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:34.079495  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:34.578818  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:35.079072  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:35.579200  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:35.601658  847956 command_runner.go:130] > 1083
	I1114 15:12:35.602947  847956 api_server.go:72] duration metric: took 2.548172349s to wait for apiserver process to appear ...
	I1114 15:12:35.602978  847956 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:12:35.602998  847956 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:12:39.307341  847956 api_server.go:279] https://192.168.39.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:12:39.307378  847956 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:12:39.307393  847956 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:12:39.381124  847956 api_server.go:279] https://192.168.39.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:12:39.381165  847956 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:12:39.881958  847956 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:12:39.888136  847956 api_server.go:279] https://192.168.39.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:12:39.888182  847956 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:12:40.381672  847956 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:12:40.386339  847956 api_server.go:279] https://192.168.39.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:12:40.386370  847956 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:12:40.882181  847956 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:12:40.892609  847956 api_server.go:279] https://192.168.39.63:8443/healthz returned 200:
	ok
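The health probe hits /healthz anonymously, which explains the progression above: 403 while anonymous access is still forbidden (RBAC bootstrap roles not yet applied), 500 while the remaining post-start hooks finish, then 200. The same probe can be reproduced with curl (sketch; -k skips TLS verification because the request is unauthenticated):

  curl -k https://192.168.39.63:8443/healthz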
	I1114 15:12:40.892805  847956 round_trippers.go:463] GET https://192.168.39.63:8443/version
	I1114 15:12:40.892820  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:40.892851  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:40.892866  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:40.901335  847956 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1114 15:12:40.901364  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:40.901375  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:40.901383  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:40.901391  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:40.901398  847956 round_trippers.go:580]     Content-Length: 264
	I1114 15:12:40.901415  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:40 GMT
	I1114 15:12:40.901430  847956 round_trippers.go:580]     Audit-Id: c9069938-0956-412a-9ce8-c50e7e4076bd
	I1114 15:12:40.901438  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:40.901504  847956 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1114 15:12:40.901609  847956 api_server.go:141] control plane version: v1.28.3
	I1114 15:12:40.901646  847956 api_server.go:131] duration metric: took 5.298659836s to wait for apiserver health ...
	I1114 15:12:40.901660  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:12:40.901670  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:12:40.903200  847956 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 15:12:40.904826  847956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 15:12:40.922402  847956 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 15:12:40.922430  847956 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 15:12:40.922446  847956 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 15:12:40.922454  847956 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:12:40.922462  847956 command_runner.go:130] > Access: 2023-11-14 15:12:06.839117816 +0000
	I1114 15:12:40.922468  847956 command_runner.go:130] > Modify: 2023-11-09 04:45:09.000000000 +0000
	I1114 15:12:40.922476  847956 command_runner.go:130] > Change: 2023-11-14 15:12:04.750117816 +0000
	I1114 15:12:40.922482  847956 command_runner.go:130] >  Birth: -
	I1114 15:12:40.922886  847956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 15:12:40.922911  847956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 15:12:40.959127  847956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 15:12:42.178135  847956 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:12:42.182914  847956 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:12:42.186035  847956 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 15:12:42.203416  847956 command_runner.go:130] > daemonset.apps/kindnet configured
	I1114 15:12:42.206067  847956 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.246896403s)
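The CNI step applies the kindnet manifest with the node-local kubectl binary and kubeconfig; the result can be verified the same way (sketch, assuming kindnet's DaemonSet lives in kube-system as in minikube's manifest):

  sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get daemonset kindnet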
	I1114 15:12:42.206135  847956 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:12:42.206352  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:42.206370  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.206380  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.206393  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.211713  847956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:12:42.211741  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.211753  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.211775  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.211794  847956 round_trippers.go:580]     Audit-Id: c27eb0a4-2e7a-4f98-9c51-7b920a7ccdb4
	I1114 15:12:42.211802  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.211811  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.211820  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.216474  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"796"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83096 chars]
	I1114 15:12:42.221535  847956 system_pods.go:59] 12 kube-system pods found
	I1114 15:12:42.221612  847956 system_pods.go:61] "coredns-5dd5756b68-vh8ng" [25afe3b4-014e-4180-9597-fb237d622c81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:12:42.221630  847956 system_pods.go:61] "etcd-multinode-627820" [f7ab1cba-820a-4cad-8607-dcf55b587b77] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:12:42.221642  847956 system_pods.go:61] "kindnet-2d26z" [0ca83d6c-6208-49c7-b979-775971913b25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 15:12:42.221660  847956 system_pods.go:61] "kindnet-8wr7d" [d43cbd11-a37d-4e27-85b3-47ede6e9516b] Running
	I1114 15:12:42.221678  847956 system_pods.go:61] "kindnet-f8xnr" [457f993f-4895-488a-8277-d5187afda5d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 15:12:42.221691  847956 system_pods.go:61] "kube-apiserver-multinode-627820" [8a9b9224-3446-46f7-b525-e1f32bb9a33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:12:42.221704  847956 system_pods.go:61] "kube-controller-manager-multinode-627820" [b4440d06-27f9-4455-ae59-2d8c744b99a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:12:42.221716  847956 system_pods.go:61] "kube-proxy-4hf2k" [205bb9ac-4540-41d6-adb8-078c02d91b4e] Running
	I1114 15:12:42.221726  847956 system_pods.go:61] "kube-proxy-6xg9v" [2304a457-3a85-4791-8d18-4e1262db399f] Running
	I1114 15:12:42.221737  847956 system_pods.go:61] "kube-proxy-m24mc" [73a6d4c8-2f95-4818-bc62-566099466b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:12:42.221751  847956 system_pods.go:61] "kube-scheduler-multinode-627820" [ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:12:42.221761  847956 system_pods.go:61] "storage-provisioner" [f9cf343d-66fc-4de5-b0e0-df38ace21868] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:12:42.221774  847956 system_pods.go:74] duration metric: took 15.624971ms to wait for pod list to return data ...
	I1114 15:12:42.221788  847956 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:12:42.221867  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes
	I1114 15:12:42.221879  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.221890  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.221901  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.228853  847956 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 15:12:42.228876  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.228886  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.228912  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.228921  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.228937  847956 round_trippers.go:580]     Audit-Id: cbc8947d-1121-418d-8b83-3402ebdb22fa
	I1114 15:12:42.228944  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.228951  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.229373  847956 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15370 chars]
	I1114 15:12:42.230520  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:12:42.230554  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:12:42.230600  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:12:42.230607  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:12:42.230611  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:12:42.230618  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:12:42.230623  847956 node_conditions.go:105] duration metric: took 8.828352ms to run NodePressure ...
	I1114 15:12:42.230649  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:12:42.412360  847956 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1114 15:12:42.475041  847956 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1114 15:12:42.476680  847956 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:12:42.476844  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1114 15:12:42.476861  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.476869  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.476875  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.479541  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.479562  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.479572  847956 round_trippers.go:580]     Audit-Id: 3b9ef86f-0a23-465e-94d3-ea4bffea8ffd
	I1114 15:12:42.479579  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.479586  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.479593  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.479602  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.479614  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.480228  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"802"},"items":[{"metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"759","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I1114 15:12:42.481292  847956 kubeadm.go:787] kubelet initialised
	I1114 15:12:42.481312  847956 kubeadm.go:788] duration metric: took 4.606717ms waiting for restarted kubelet to initialise ...
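The kubelet-initialised check above queries control-plane pods by label; the "%!D(MISSING)" in the request URL is a log-formatting artifact of the URL-encoded "=" (%3D), so the effective selector is tier=control-plane. An equivalent query from outside the node (sketch, assuming the profile's kubeconfig is active):

  kubectl -n kube-system get pods -l tier=control-plane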
	I1114 15:12:42.481320  847956 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:12:42.481386  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:42.481399  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.481406  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.481411  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.484589  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:42.484614  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.484624  847956 round_trippers.go:580]     Audit-Id: 6570231f-e88a-420e-b2a2-b2dd13c33bd2
	I1114 15:12:42.484633  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.484653  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.484667  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.484679  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.484692  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.486494  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"802"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82595 chars]
	I1114 15:12:42.489052  847956 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:42.489147  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:42.489159  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.489170  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.489180  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.490993  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:42.491015  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.491024  847956 round_trippers.go:580]     Audit-Id: 15ea2c21-5c0c-491f-8c1a-5ed0d00299fb
	I1114 15:12:42.491031  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.491038  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.491052  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.491059  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.491067  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.491273  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:42.491828  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:42.491847  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.491857  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.491866  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.494043  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.494064  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.494074  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.494082  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.494089  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.494097  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.494105  847956 round_trippers.go:580]     Audit-Id: a1d42440-d878-4791-8822-3db0c91cb8c1
	I1114 15:12:42.494116  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.494277  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:42.494680  847956 pod_ready.go:97] node "multinode-627820" hosting pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.494703  847956 pod_ready.go:81] duration metric: took 5.630655ms waiting for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	E1114 15:12:42.494712  847956 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-627820" hosting pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
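The pod_ready loop skips a pod whenever its hosting node is not yet Ready; the node condition it inspects can be read directly (sketch, assuming the profile's kubeconfig is active):

  kubectl get node multinode-627820 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'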
	I1114 15:12:42.494719  847956 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:42.494778  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-627820
	I1114 15:12:42.494785  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.494792  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.494798  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.497076  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.497100  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.497111  847956 round_trippers.go:580]     Audit-Id: 48f19a14-6ac9-4998-9e4a-4122a86638d2
	I1114 15:12:42.497119  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.497127  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.497137  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.497147  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.497157  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.497336  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"759","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1114 15:12:42.497811  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:42.497831  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.497841  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.497850  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.500036  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.500054  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.500063  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.500072  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.500080  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.500096  847956 round_trippers.go:580]     Audit-Id: af65bab2-432e-41d8-a613-807fec60ebbc
	I1114 15:12:42.500105  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.500112  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.500912  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:42.501298  847956 pod_ready.go:97] node "multinode-627820" hosting pod "etcd-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.501322  847956 pod_ready.go:81] duration metric: took 6.595741ms waiting for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	E1114 15:12:42.501334  847956 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-627820" hosting pod "etcd-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.501351  847956 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:42.501437  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-627820
	I1114 15:12:42.501449  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.501469  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.501484  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.503565  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.503584  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.503594  847956 round_trippers.go:580]     Audit-Id: 0646aa5e-90e8-42e5-8e0f-ededd48f64e4
	I1114 15:12:42.503602  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.503614  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.503622  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.503630  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.503636  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.503744  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-627820","namespace":"kube-system","uid":"8a9b9224-3446-46f7-b525-e1f32bb9a33c","resourceVersion":"753","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.63:8443","kubernetes.io/config.hash":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.mirror":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.seen":"2023-11-14T15:02:19.515752674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1114 15:12:42.504232  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:42.504250  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.504257  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.504262  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.505835  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:42.505859  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.505870  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.505883  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.505889  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.505894  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.505899  847956 round_trippers.go:580]     Audit-Id: b1642af2-35a0-4dfc-8e0f-8183a8e1a539
	I1114 15:12:42.505905  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.506057  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:42.506427  847956 pod_ready.go:97] node "multinode-627820" hosting pod "kube-apiserver-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.506449  847956 pod_ready.go:81] duration metric: took 5.085686ms waiting for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	E1114 15:12:42.506472  847956 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-627820" hosting pod "kube-apiserver-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.506479  847956 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:42.506538  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-627820
	I1114 15:12:42.506549  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.506556  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.506562  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.509499  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.509518  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.509527  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.509534  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.509541  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.509549  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.509557  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.509569  847956 round_trippers.go:580]     Audit-Id: 8a6cc12f-23c6-45c6-999e-241e0fc47e06
	I1114 15:12:42.509762  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-627820","namespace":"kube-system","uid":"b4440d06-27f9-4455-ae59-2d8c744b99a2","resourceVersion":"761","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.mirror":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.seen":"2023-11-14T15:02:19.515747223Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I1114 15:12:42.606456  847956 request.go:629] Waited for 96.201433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:42.606527  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:42.606534  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.606545  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.606558  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.609213  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.609231  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.609238  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.609243  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.609248  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.609253  847956 round_trippers.go:580]     Audit-Id: 01f73fe3-188b-49a1-839a-47dd5fa542c6
	I1114 15:12:42.609258  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.609271  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.609732  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:42.610089  847956 pod_ready.go:97] node "multinode-627820" hosting pod "kube-controller-manager-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.610116  847956 pod_ready.go:81] duration metric: took 103.622675ms waiting for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	E1114 15:12:42.610130  847956 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-627820" hosting pod "kube-controller-manager-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:42.610140  847956 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:42.806514  847956 request.go:629] Waited for 196.295469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:12:42.806607  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:12:42.806612  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:42.806622  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:42.806628  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:42.809512  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:42.809535  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:42.809542  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:42.809548  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:42.809553  847956 round_trippers.go:580]     Audit-Id: 75a83d61-116e-4562-a148-f3235d6693f4
	I1114 15:12:42.809558  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:42.809563  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:42.809576  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:42.810000  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4hf2k","generateName":"kube-proxy-","namespace":"kube-system","uid":"205bb9ac-4540-41d6-adb8-078c02d91b4e","resourceVersion":"672","creationTimestamp":"2023-11-14T15:04:00Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1114 15:12:43.006889  847956 request.go:629] Waited for 196.410454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:12:43.006973  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:12:43.006980  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:43.006991  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:43.007000  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:43.009584  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:43.009718  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:43.009750  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:43.009760  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:43.009774  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:42 GMT
	I1114 15:12:43.009783  847956 round_trippers.go:580]     Audit-Id: fa6f314c-a48b-4393-ae26-3430f2cb69ee
	I1114 15:12:43.009792  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:43.009802  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:43.009955  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m03","uid":"019405fb-baac-496b-96ae-131218281f18","resourceVersion":"696","creationTimestamp":"2023-11-14T15:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I1114 15:12:43.010257  847956 pod_ready.go:92] pod "kube-proxy-4hf2k" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:43.010274  847956 pod_ready.go:81] duration metric: took 400.119859ms waiting for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:43.010283  847956 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:43.206772  847956 request.go:629] Waited for 196.4096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:12:43.206890  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:12:43.206900  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:43.206911  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:43.206932  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:43.209672  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:43.209708  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:43.209721  847956 round_trippers.go:580]     Audit-Id: b75be485-d121-42c4-9a47-cf740fd79132
	I1114 15:12:43.209730  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:43.209739  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:43.209748  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:43.209756  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:43.209773  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:43 GMT
	I1114 15:12:43.209965  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6xg9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"2304a457-3a85-4791-8d18-4e1262db399f","resourceVersion":"467","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1114 15:12:43.407067  847956 request.go:629] Waited for 196.381174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:12:43.407151  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:12:43.407159  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:43.407171  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:43.407183  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:43.409827  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:43.409854  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:43.409862  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:43.409871  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:43.409876  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:43.409881  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:43 GMT
	I1114 15:12:43.409886  847956 round_trippers.go:580]     Audit-Id: 6667d94f-c813-4d7a-94fd-c621143b71b6
	I1114 15:12:43.409891  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:43.410031  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"535","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1114 15:12:43.410393  847956 pod_ready.go:92] pod "kube-proxy-6xg9v" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:43.410412  847956 pod_ready.go:81] duration metric: took 400.121696ms waiting for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:43.410425  847956 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:43.606911  847956 request.go:629] Waited for 196.397479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:12:43.607007  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:12:43.607018  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:43.607026  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:43.607032  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:43.609682  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:43.609706  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:43.609738  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:43.609748  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:43.609760  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:43 GMT
	I1114 15:12:43.609766  847956 round_trippers.go:580]     Audit-Id: 891d5d84-1645-46c7-9d7f-2e41d42baed4
	I1114 15:12:43.609773  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:43.609785  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:43.610166  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m24mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"73a6d4c8-2f95-4818-bc62-566099466b42","resourceVersion":"799","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5513 chars]
	I1114 15:12:43.807049  847956 request.go:629] Waited for 196.285311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:43.807127  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:43.807132  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:43.807140  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:43.807146  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:43.809741  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:43.809771  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:43.809783  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:43.809792  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:43.809800  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:43.809808  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:43 GMT
	I1114 15:12:43.809816  847956 round_trippers.go:580]     Audit-Id: d59c17c3-52c0-4bc8-8087-bfba25890341
	I1114 15:12:43.809827  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:43.809996  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:43.810479  847956 pod_ready.go:97] node "multinode-627820" hosting pod "kube-proxy-m24mc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:43.810511  847956 pod_ready.go:81] duration metric: took 400.074333ms waiting for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	E1114 15:12:43.810524  847956 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-627820" hosting pod "kube-proxy-m24mc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:43.810532  847956 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:44.007045  847956 request.go:629] Waited for 196.400251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:12:44.007134  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:12:44.007139  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:44.007148  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:44.007155  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:44.010657  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:44.010684  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:44.010693  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:43 GMT
	I1114 15:12:44.010700  847956 round_trippers.go:580]     Audit-Id: 034638f6-1dd4-4018-af2e-903b2dba3480
	I1114 15:12:44.010708  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:44.010716  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:44.010723  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:44.010752  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:44.011230  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-627820","namespace":"kube-system","uid":"ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd","resourceVersion":"757","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.mirror":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.seen":"2023-11-14T15:02:19.515750784Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1114 15:12:44.207142  847956 request.go:629] Waited for 195.412776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:44.207261  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:44.207293  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:44.207308  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:44.207320  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:44.209913  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:44.209941  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:44.209948  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:44.209954  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:44 GMT
	I1114 15:12:44.209965  847956 round_trippers.go:580]     Audit-Id: b1db1872-61aa-4c3d-b611-18d50cf9d953
	I1114 15:12:44.209973  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:44.209983  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:44.209991  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:44.210170  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:44.210541  847956 pod_ready.go:97] node "multinode-627820" hosting pod "kube-scheduler-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:44.210560  847956 pod_ready.go:81] duration metric: took 400.021631ms waiting for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	E1114 15:12:44.210570  847956 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-627820" hosting pod "kube-scheduler-multinode-627820" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-627820" has status "Ready":"False"
	I1114 15:12:44.210582  847956 pod_ready.go:38] duration metric: took 1.729254249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
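
The pod_ready waits above poll each control-plane pod but also require the node hosting it to be Ready, which is why every check is "skipping!" while multinode-627820 still reports Ready=False. A minimal client-go sketch of that pattern follows; it is illustrative only (not minikube's pod_ready.go), and the kubeconfig path and object names are placeholders.

// podready_sketch.go - illustrative sketch, assuming a reachable cluster and a
// kubeconfig at the given path. Mirrors the pattern in the log: a pod only
// counts as Ready if its node is Ready and its PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(conds []corev1.PodCondition) bool {
	for _, c := range conds {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func nodeReady(conds []corev1.NodeCondition) bool {
	for _, c := range conds {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-627820", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node.Status.Conditions) {
		fmt.Printf("node %q not Ready; skipping pod %q\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod.Status.Conditions))
}
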
	I1114 15:12:44.210600  847956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:12:44.224432  847956 command_runner.go:130] > -16
	I1114 15:12:44.224462  847956 ops.go:34] apiserver oom_adj: -16
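
The oom_adj probe above simply reads /proc/<pid>/oom_adj for the apiserver process (here -16, i.e. strongly protected from the OOM killer). A rough Go equivalent of that read, with the pid hard-coded as a placeholder instead of resolved via pgrep:

// oomadj_sketch.go - illustrative only. Reads the oom_adj value for a given
// pid, the same information the log obtains with
// "cat /proc/$(pgrep kube-apiserver)/oom_adj" (which returned -16).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := "1234" // placeholder; the log resolves it with pgrep kube-apiserver
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}
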
	I1114 15:12:44.224469  847956 kubeadm.go:640] restartCluster took 22.565426631s
	I1114 15:12:44.224482  847956 kubeadm.go:406] StartCluster complete in 22.612928032s
	I1114 15:12:44.224515  847956 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:12:44.224621  847956 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:12:44.225601  847956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:12:44.225850  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:12:44.226081  847956 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:12:44.226209  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:12:44.229363  847956 out.go:177] * Enabled addons: 
	I1114 15:12:44.226216  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:12:44.230841  847956 addons.go:502] enable addons completed in 4.772111ms: enabled=[]
	I1114 15:12:44.229753  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
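
The "Waited for ... due to client-side throttling" lines scattered through this log come from client-go's own rate limiter, not API priority and fairness; the rest.Config dump above shows QPS:0 and Burst:0, so the client falls back to client-go's defaults (5 QPS / 10 burst, to the best of my knowledge). A hedged sketch of how a client could raise those limits; the values and kubeconfig path are illustrative, not what minikube uses:

// throttling_sketch.go - illustrative only. With QPS/Burst left at zero,
// client-go applies its default token-bucket limiter and logs
// "Waited for ... due to client-side throttling" when requests queue up.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Raise the client-side rate limits (example values only).
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
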
	I1114 15:12:44.231199  847956 round_trippers.go:463] GET https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:12:44.231215  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:44.231226  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:44.231234  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:44.234036  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:44.234055  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:44.234062  847956 round_trippers.go:580]     Content-Length: 291
	I1114 15:12:44.234067  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:44 GMT
	I1114 15:12:44.234078  847956 round_trippers.go:580]     Audit-Id: 028b2667-c40d-40f4-9586-751b3ff8f336
	I1114 15:12:44.234087  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:44.234101  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:44.234109  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:44.234118  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:44.234243  847956 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"801","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 15:12:44.234502  847956 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-627820" context rescaled to 1 replicas
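
The rescale step goes through the autoscaling/v1 Scale subresource that the GET above returns for the coredns deployment. A minimal client-go sketch of the same read-then-update flow (illustrative only, not minikube's kapi.go; the kubeconfig path is a placeholder):

// coredns_scale_sketch.go - illustrative only; assumes a kubeconfig path.
// Reads the Scale subresource of the coredns deployment and sets replicas to 1,
// matching the "rescaled to 1 replicas" step in the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}
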
	I1114 15:12:44.234544  847956 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:12:44.236220  847956 out.go:177] * Verifying Kubernetes components...
	I1114 15:12:44.237659  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:12:44.374593  847956 command_runner.go:130] > apiVersion: v1
	I1114 15:12:44.374617  847956 command_runner.go:130] > data:
	I1114 15:12:44.374621  847956 command_runner.go:130] >   Corefile: |
	I1114 15:12:44.374625  847956 command_runner.go:130] >     .:53 {
	I1114 15:12:44.374628  847956 command_runner.go:130] >         log
	I1114 15:12:44.374635  847956 command_runner.go:130] >         errors
	I1114 15:12:44.374639  847956 command_runner.go:130] >         health {
	I1114 15:12:44.374644  847956 command_runner.go:130] >            lameduck 5s
	I1114 15:12:44.374647  847956 command_runner.go:130] >         }
	I1114 15:12:44.374652  847956 command_runner.go:130] >         ready
	I1114 15:12:44.374657  847956 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1114 15:12:44.374661  847956 command_runner.go:130] >            pods insecure
	I1114 15:12:44.374667  847956 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1114 15:12:44.374671  847956 command_runner.go:130] >            ttl 30
	I1114 15:12:44.374674  847956 command_runner.go:130] >         }
	I1114 15:12:44.374678  847956 command_runner.go:130] >         prometheus :9153
	I1114 15:12:44.374683  847956 command_runner.go:130] >         hosts {
	I1114 15:12:44.374705  847956 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1114 15:12:44.374712  847956 command_runner.go:130] >            fallthrough
	I1114 15:12:44.374716  847956 command_runner.go:130] >         }
	I1114 15:12:44.374721  847956 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1114 15:12:44.374728  847956 command_runner.go:130] >            max_concurrent 1000
	I1114 15:12:44.374731  847956 command_runner.go:130] >         }
	I1114 15:12:44.374735  847956 command_runner.go:130] >         cache 30
	I1114 15:12:44.374741  847956 command_runner.go:130] >         loop
	I1114 15:12:44.374748  847956 command_runner.go:130] >         reload
	I1114 15:12:44.374752  847956 command_runner.go:130] >         loadbalance
	I1114 15:12:44.374755  847956 command_runner.go:130] >     }
	I1114 15:12:44.374759  847956 command_runner.go:130] > kind: ConfigMap
	I1114 15:12:44.374762  847956 command_runner.go:130] > metadata:
	I1114 15:12:44.374767  847956 command_runner.go:130] >   creationTimestamp: "2023-11-14T15:02:19Z"
	I1114 15:12:44.374773  847956 command_runner.go:130] >   name: coredns
	I1114 15:12:44.374777  847956 command_runner.go:130] >   namespace: kube-system
	I1114 15:12:44.374781  847956 command_runner.go:130] >   resourceVersion: "359"
	I1114 15:12:44.374786  847956 command_runner.go:130] >   uid: 4cf214f8-5e9c-406e-819b-2e5b336d9fc3
	I1114 15:12:44.377324  847956 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
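
The Corefile dumped above already carries a hosts block mapping 192.168.39.1 to host.minikube.internal, which is why the start logic skips rewriting it. A rough sketch of that check against the coredns ConfigMap (illustrative only; the kubeconfig path is a placeholder):

// corefile_check_sketch.go - illustrative only. Fetches the coredns ConfigMap
// and checks whether the Corefile already contains the host.minikube.internal
// record, as the "already contains ... skipping" log line reports.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("CoreDNS already contains the host.minikube.internal record, skipping")
	}
}
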
	I1114 15:12:44.377352  847956 node_ready.go:35] waiting up to 6m0s for node "multinode-627820" to be "Ready" ...
	I1114 15:12:44.406776  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:44.406809  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:44.406824  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:44.406835  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:44.409767  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:44.409789  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:44.409807  847956 round_trippers.go:580]     Audit-Id: 55101371-6385-4b77-a405-708b3025b4f2
	I1114 15:12:44.409816  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:44.409824  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:44.409832  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:44.409840  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:44.409850  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:44 GMT
	I1114 15:12:44.410254  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:44.607296  847956 request.go:629] Waited for 196.407765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:44.607393  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:44.607401  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:44.607409  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:44.607418  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:44.610063  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:44.610094  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:44.610105  847956 round_trippers.go:580]     Audit-Id: af84ed3f-4d21-45ac-8802-72d9a8bd6cfc
	I1114 15:12:44.610113  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:44.610122  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:44.610131  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:44.610141  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:44.610149  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:44 GMT
	I1114 15:12:44.610306  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:45.111473  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:45.111503  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:45.111511  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:45.111517  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:45.114520  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:45.114548  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:45.114559  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:45.114567  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:45.114576  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:45.114585  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:45 GMT
	I1114 15:12:45.114652  847956 round_trippers.go:580]     Audit-Id: 3d4289be-aec7-48e6-b48c-2f8f78360fae
	I1114 15:12:45.114670  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:45.114807  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:45.611837  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:45.611865  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:45.611874  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:45.611880  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:45.614746  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:45.614770  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:45.614781  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:45 GMT
	I1114 15:12:45.614789  847956 round_trippers.go:580]     Audit-Id: 38670576-cd80-432a-86e1-33105509c405
	I1114 15:12:45.614796  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:45.614813  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:45.614822  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:45.614835  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:45.615001  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:46.111711  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:46.111739  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:46.111751  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:46.111758  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:46.115246  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:46.115276  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:46.115286  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:46.115294  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:46.115300  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:46.115306  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:46 GMT
	I1114 15:12:46.115311  847956 round_trippers.go:580]     Audit-Id: 61c68036-17e6-47ff-9014-22b121a617f5
	I1114 15:12:46.115319  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:46.115516  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:46.610970  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:46.611011  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:46.611021  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:46.611027  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:46.614366  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:46.614391  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:46.614401  847956 round_trippers.go:580]     Audit-Id: ac390e38-7b60-47e8-819c-73fd74949e2c
	I1114 15:12:46.614411  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:46.614419  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:46.614428  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:46.614434  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:46.614440  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:46 GMT
	I1114 15:12:46.615119  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:46.615511  847956 node_ready.go:58] node "multinode-627820" has status "Ready":"False"
	I1114 15:12:47.111882  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:47.111914  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:47.111927  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:47.111936  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:47.115124  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:47.115154  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:47.115166  847956 round_trippers.go:580]     Audit-Id: 63ffea63-3bb4-474b-a2b6-0bdd014215c4
	I1114 15:12:47.115175  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:47.115186  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:47.115200  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:47.115206  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:47.115212  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:47 GMT
	I1114 15:12:47.115554  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:47.611182  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:47.611211  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:47.611220  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:47.611227  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:47.615055  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:47.615084  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:47.615095  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:47.615102  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:47.615108  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:47.615113  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:47.615118  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:47 GMT
	I1114 15:12:47.615123  847956 round_trippers.go:580]     Audit-Id: 26e51a7a-2353-4d40-a3a5-b7572cf14ecf
	I1114 15:12:47.615377  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:48.111007  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:48.111037  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:48.111049  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:48.111058  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:48.113836  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:48.113862  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:48.113869  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:48.113875  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:48.113880  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:48.113885  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:48.113906  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:48 GMT
	I1114 15:12:48.113912  847956 round_trippers.go:580]     Audit-Id: 7a3f94dc-1f0b-42a5-a0fc-a9fcad10753c
	I1114 15:12:48.114281  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:48.610972  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:48.611006  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:48.611018  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:48.611025  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:48.613959  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:48.613982  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:48.613993  847956 round_trippers.go:580]     Audit-Id: 1d330c42-da39-4438-b914-46a549ac0c5d
	I1114 15:12:48.613999  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:48.614004  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:48.614009  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:48.614014  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:48.614019  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:48 GMT
	I1114 15:12:48.614636  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:49.111823  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:49.111850  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.111861  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.111867  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.115041  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:49.115071  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.115080  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.115088  847956 round_trippers.go:580]     Audit-Id: 88199573-8575-4900-8da7-b469fd1a24e7
	I1114 15:12:49.115096  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.115104  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.115112  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.115120  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.115301  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"694","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1114 15:12:49.115693  847956 node_ready.go:58] node "multinode-627820" has status "Ready":"False"
	I1114 15:12:49.610985  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:49.611010  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.611019  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.611025  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.613297  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:49.613321  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.613329  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.613341  847956 round_trippers.go:580]     Audit-Id: 2e7876e8-c704-4cc4-89e3-d84c0f801633
	I1114 15:12:49.613349  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.613358  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.613365  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.613377  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.613509  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:49.613862  847956 node_ready.go:49] node "multinode-627820" has status "Ready":"True"
	I1114 15:12:49.613880  847956 node_ready.go:38] duration metric: took 5.23650334s waiting for node "multinode-627820" to be "Ready" ...
	I1114 15:12:49.613889  847956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:12:49.613957  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:49.613967  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.613974  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.613981  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.618289  847956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:12:49.618318  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.618327  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.618345  847956 round_trippers.go:580]     Audit-Id: 8e9b04a8-dc16-426b-8a84-62857173ee5c
	I1114 15:12:49.618356  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.618368  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.618377  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.618388  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.620190  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"823"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82428 chars]
	I1114 15:12:49.622713  847956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:49.622821  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:49.622839  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.622850  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.622863  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.624869  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:49.624891  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.624904  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.624921  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.624931  847956 round_trippers.go:580]     Audit-Id: 8d768e05-dc65-4d9d-be7e-6a56a8575e27
	I1114 15:12:49.624939  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.624958  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.624971  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.625118  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:49.625630  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:49.625646  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.625653  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.625659  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.630279  847956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:12:49.630302  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.630312  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.630320  847956 round_trippers.go:580]     Audit-Id: 73e3f3f0-e7e2-4b8b-bfd7-12b81011bdd7
	I1114 15:12:49.630328  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.630346  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.630360  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.630368  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.630511  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:49.631048  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:49.631065  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.631072  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.631078  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.633526  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:49.633548  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.633557  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.633565  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.633573  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.633582  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.633594  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.633603  847956 round_trippers.go:580]     Audit-Id: 54a5adf3-ce17-4c92-b3e3-c47e3049f37b
	I1114 15:12:49.633787  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:49.634269  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:49.634286  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:49.634293  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:49.634299  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:49.636120  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:49.636141  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:49.636150  847956 round_trippers.go:580]     Audit-Id: 817b869d-7652-48d6-b22a-095748cd8f20
	I1114 15:12:49.636158  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:49.636167  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:49.636176  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:49.636187  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:49.636194  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:49 GMT
	I1114 15:12:49.636319  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:50.137605  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:50.137643  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:50.137656  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:50.137665  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:50.140563  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:50.140588  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:50.140597  847956 round_trippers.go:580]     Audit-Id: 58e3daa4-cfa0-48e8-b418-d167f5a06dfd
	I1114 15:12:50.140607  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:50.140617  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:50.140626  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:50.140639  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:50.140645  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:50 GMT
	I1114 15:12:50.140984  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:50.141612  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:50.141630  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:50.141641  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:50.141652  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:50.143880  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:50.143898  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:50.143907  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:50.143915  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:50.143923  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:50.143935  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:50.143947  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:50 GMT
	I1114 15:12:50.143959  847956 round_trippers.go:580]     Audit-Id: 078728d5-6ed9-4ea4-972d-178c20548179
	I1114 15:12:50.144282  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:50.637119  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:50.637151  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:50.637176  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:50.637185  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:50.640706  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:50.640729  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:50.640756  847956 round_trippers.go:580]     Audit-Id: 7d0a6c52-921f-426e-95be-50390afb1801
	I1114 15:12:50.640767  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:50.640775  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:50.640784  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:50.640794  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:50.640802  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:50 GMT
	I1114 15:12:50.640984  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:50.641602  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:50.641619  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:50.641630  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:50.641640  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:50.644758  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:50.644806  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:50.644819  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:50.644830  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:50.644839  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:50 GMT
	I1114 15:12:50.644847  847956 round_trippers.go:580]     Audit-Id: 9fa401ba-61c3-4e89-b049-26b052cf60c9
	I1114 15:12:50.644853  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:50.644859  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:50.645135  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:51.136962  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:51.136995  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:51.137008  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:51.137017  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:51.140592  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:51.140613  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:51.140622  847956 round_trippers.go:580]     Audit-Id: f116062a-0fe6-4497-a777-5d3028639fb7
	I1114 15:12:51.140629  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:51.140634  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:51.140639  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:51.140657  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:51.140666  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:51 GMT
	I1114 15:12:51.141223  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:51.141727  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:51.141744  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:51.141755  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:51.141765  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:51.143942  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:51.143963  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:51.143970  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:51.143975  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:51.143981  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:51.143985  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:51 GMT
	I1114 15:12:51.143990  847956 round_trippers.go:580]     Audit-Id: 89624bcd-66b4-472b-af8b-c6a4a6c1d193
	I1114 15:12:51.143998  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:51.144280  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:51.636975  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:51.637011  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:51.637024  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:51.637033  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:51.640806  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:51.640832  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:51.640842  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:51.640851  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:51.640858  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:51.640866  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:51.640874  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:51 GMT
	I1114 15:12:51.640882  847956 round_trippers.go:580]     Audit-Id: a0d75e73-9111-42ac-b4dc-2d12ccffd48e
	I1114 15:12:51.641302  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:51.641884  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:51.641905  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:51.641913  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:51.641918  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:51.644213  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:51.644233  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:51.644243  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:51.644251  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:51.644259  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:51.644267  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:51.644281  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:51 GMT
	I1114 15:12:51.644295  847956 round_trippers.go:580]     Audit-Id: 9b5ebd90-cb35-43ab-953b-ad1c74eeba71
	I1114 15:12:51.644592  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:51.644914  847956 pod_ready.go:102] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"False"
	I1114 15:12:52.137456  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:52.137480  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:52.137488  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:52.137494  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:52.141446  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:52.141472  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:52.141479  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:52 GMT
	I1114 15:12:52.141485  847956 round_trippers.go:580]     Audit-Id: 7491faad-d43d-4377-9e0b-bee6a3adc7b2
	I1114 15:12:52.141490  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:52.141507  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:52.141513  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:52.141518  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:52.142529  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:52.143273  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:52.143296  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:52.143309  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:52.143318  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:52.147155  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:52.147175  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:52.147182  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:52.147194  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:52.147199  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:52.147204  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:52 GMT
	I1114 15:12:52.147209  847956 round_trippers.go:580]     Audit-Id: b55b50aa-5e1e-4223-810e-16546e914aa9
	I1114 15:12:52.147214  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:52.147440  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:52.637030  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:52.637061  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:52.637073  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:52.637079  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:52.640120  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:52.640142  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:52.640149  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:52.640154  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:52.640167  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:52.640176  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:52.640184  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:52 GMT
	I1114 15:12:52.640192  847956 round_trippers.go:580]     Audit-Id: 820b0042-58b5-495a-956b-157570037868
	I1114 15:12:52.640373  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:52.641050  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:52.641070  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:52.641081  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:52.641089  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:52.643963  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:52.643996  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:52.644007  847956 round_trippers.go:580]     Audit-Id: e8e6793a-c179-47e2-8c5b-cc6ef090098d
	I1114 15:12:52.644013  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:52.644021  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:52.644026  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:52.644034  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:52.644039  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:52 GMT
	I1114 15:12:52.644246  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:53.137896  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:53.137924  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:53.137933  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:53.137939  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:53.140848  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:53.140870  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:53.140877  847956 round_trippers.go:580]     Audit-Id: a57df441-bbd7-4441-98ab-724f7c1c3f61
	I1114 15:12:53.140882  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:53.140887  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:53.140896  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:53.140903  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:53.140909  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:53 GMT
	I1114 15:12:53.141236  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:53.141785  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:53.141801  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:53.141810  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:53.141822  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:53.144095  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:53.144118  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:53.144128  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:53.144137  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:53.144144  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:53.144151  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:53 GMT
	I1114 15:12:53.144159  847956 round_trippers.go:580]     Audit-Id: 1bde9429-b4ab-4183-af38-f15893f4c9b0
	I1114 15:12:53.144171  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:53.144464  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:53.637134  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:53.637170  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:53.637182  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:53.637190  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:53.640012  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:53.640034  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:53.640041  847956 round_trippers.go:580]     Audit-Id: 8f959d0d-d021-4064-8cc8-7aa1b5d8d8da
	I1114 15:12:53.640047  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:53.640052  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:53.640057  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:53.640063  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:53.640068  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:53 GMT
	I1114 15:12:53.640275  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:53.640804  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:53.640821  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:53.640829  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:53.640841  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:53.643008  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:53.643024  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:53.643030  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:53.643036  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:53.643040  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:53.643045  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:53 GMT
	I1114 15:12:53.643050  847956 round_trippers.go:580]     Audit-Id: 675f31d6-e33d-439e-be6a-95f679827d1d
	I1114 15:12:53.643055  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:53.643277  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:54.137606  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:54.137635  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:54.137644  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:54.137650  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:54.141089  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:54.141111  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:54.141120  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:54.141126  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:54.141133  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:54.141141  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:54.141149  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:54 GMT
	I1114 15:12:54.141157  847956 round_trippers.go:580]     Audit-Id: 4dd7dcb2-1321-4804-bd57-96baed0f7e02
	I1114 15:12:54.141313  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:54.141805  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:54.141823  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:54.141831  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:54.141837  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:54.144111  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:54.144139  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:54.144147  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:54.144153  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:54.144158  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:54.144164  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:54.144169  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:54 GMT
	I1114 15:12:54.144174  847956 round_trippers.go:580]     Audit-Id: e9bd303e-4b9f-464c-9155-1ce7b57a91d1
	I1114 15:12:54.144372  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:54.144769  847956 pod_ready.go:102] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"False"
	I1114 15:12:54.636972  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:54.637001  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:54.637010  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:54.637024  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:54.639715  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:54.639735  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:54.639742  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:54.639754  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:54.639762  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:54 GMT
	I1114 15:12:54.639769  847956 round_trippers.go:580]     Audit-Id: a5990c95-dfcb-4bed-941d-9747ec2157d8
	I1114 15:12:54.639776  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:54.639784  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:54.640445  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:54.641078  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:54.641101  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:54.641112  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:54.641123  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:54.643611  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:54.643631  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:54.643647  847956 round_trippers.go:580]     Audit-Id: 87a88eda-f2a7-4569-addb-2f10a02abc44
	I1114 15:12:54.643656  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:54.643665  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:54.643675  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:54.643695  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:54.643708  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:54 GMT
	I1114 15:12:54.643847  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:55.137641  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:55.137678  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:55.137692  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:55.137703  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:55.140340  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:55.140371  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:55.140380  847956 round_trippers.go:580]     Audit-Id: 0918d9e6-56cd-4374-9162-764675313b25
	I1114 15:12:55.140389  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:55.140398  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:55.140407  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:55.140417  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:55.140426  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:55 GMT
	I1114 15:12:55.140625  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:55.141266  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:55.141287  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:55.141299  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:55.141309  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:55.143360  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:55.143381  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:55.143391  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:55.143400  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:55.143408  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:55.143415  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:55 GMT
	I1114 15:12:55.143420  847956 round_trippers.go:580]     Audit-Id: 884028e2-d43b-4329-90c3-a7dd47980f77
	I1114 15:12:55.143430  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:55.143586  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:55.637503  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:55.637534  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:55.637555  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:55.637573  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:55.641216  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:55.641237  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:55.641244  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:55.641250  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:55.641255  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:55.641261  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:55.641267  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:55 GMT
	I1114 15:12:55.641272  847956 round_trippers.go:580]     Audit-Id: aae6e886-6ccc-4d17-b61e-136303a491a0
	I1114 15:12:55.641422  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:55.641914  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:55.641935  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:55.641947  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:55.641964  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:55.644596  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:55.644620  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:55.644630  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:55 GMT
	I1114 15:12:55.644639  847956 round_trippers.go:580]     Audit-Id: 68eddf32-adb4-4678-a5e9-7b6bc36f1231
	I1114 15:12:55.644650  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:55.644661  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:55.644672  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:55.644683  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:55.645028  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:56.136955  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:56.136979  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:56.136988  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:56.136994  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:56.140191  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:56.140221  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:56.140233  847956 round_trippers.go:580]     Audit-Id: 63f95552-7590-4901-8997-808a14b6331a
	I1114 15:12:56.140242  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:56.140250  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:56.140259  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:56.140271  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:56.140280  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:56 GMT
	I1114 15:12:56.140452  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:56.141170  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:56.141194  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:56.141207  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:56.141218  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:56.143547  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:56.143571  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:56.143581  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:56.143591  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:56.143601  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:56 GMT
	I1114 15:12:56.143611  847956 round_trippers.go:580]     Audit-Id: 3eab85db-f605-4a10-8444-97b9ab078304
	I1114 15:12:56.143620  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:56.143634  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:56.143786  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:56.637575  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:56.637612  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:56.637625  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:56.637634  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:56.643065  847956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:12:56.643087  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:56.643094  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:56.643100  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:56.643105  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:56.643110  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:56 GMT
	I1114 15:12:56.643115  847956 round_trippers.go:580]     Audit-Id: 95c66671-e5ed-48ac-a847-2d7dd90941bc
	I1114 15:12:56.643122  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:56.643474  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:56.643984  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:56.644001  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:56.644008  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:56.644014  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:56.646893  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:56.646913  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:56.646932  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:56.646941  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:56.646949  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:56.646964  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:56 GMT
	I1114 15:12:56.646971  847956 round_trippers.go:580]     Audit-Id: d6b6d82d-45d0-42ab-aec9-7c957f61ada7
	I1114 15:12:56.646981  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:56.647151  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:56.647496  847956 pod_ready.go:102] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"False"
	I1114 15:12:57.137986  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:57.138030  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.138045  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.138056  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.140794  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:57.140817  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.140827  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.140836  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.140845  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.140855  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.140869  847956 round_trippers.go:580]     Audit-Id: 6adb41b7-cc8a-483d-a123-a158fa5bbb8b
	I1114 15:12:57.140883  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.141406  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"756","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1114 15:12:57.141975  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:57.141994  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.142006  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.142016  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.144418  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:57.144436  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.144445  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.144453  847956 round_trippers.go:580]     Audit-Id: c8acf537-fd30-49ed-93aa-cc08107fed62
	I1114 15:12:57.144460  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.144468  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.144477  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.144488  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.144633  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:57.637249  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:12:57.637283  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.637295  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.637305  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.640848  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:57.640875  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.640886  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.640894  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.640902  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.640913  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.640921  847956 round_trippers.go:580]     Audit-Id: 40856208-1c7b-4d68-831b-0db039dc5a6d
	I1114 15:12:57.640932  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.641219  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1114 15:12:57.641672  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:57.641684  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.641691  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.641698  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.643743  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:57.643767  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.643786  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.643792  847956 round_trippers.go:580]     Audit-Id: d84f5857-4807-4732-8097-e94c7b759316
	I1114 15:12:57.643797  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.643802  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.643812  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.643821  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.644174  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:57.644523  847956 pod_ready.go:92] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:57.644546  847956 pod_ready.go:81] duration metric: took 8.021805981s waiting for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
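	[editor's note] The loop above shows minikube's readiness wait: it repeatedly GETs the pod and its node from the API server roughly every 500ms until the pod's Ready condition flips to True (here after ~8s). The following is a minimal, hypothetical sketch of that polling pattern using client-go; it is not minikube's pod_ready.go implementation, and the kubeconfig path, namespace, pod name, and timeout are assumptions taken from the log for illustration only.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the pod until it is Ready or the timeout expires,
	// mirroring the ~500ms GET interval visible in the log above.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Assumed kubeconfig location; minikube's test harness wires this differently.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-vh8ng", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	[end editor's note] The log continues with the same wait pattern applied to the control-plane pods (etcd, kube-apiserver, kube-controller-manager) below.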
	I1114 15:12:57.644556  847956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.644609  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-627820
	I1114 15:12:57.644617  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.644625  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.644630  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.646909  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:57.646926  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.646935  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.646943  847956 round_trippers.go:580]     Audit-Id: bc632e78-04fe-4245-bf86-0c160e567215
	I1114 15:12:57.646952  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.646962  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.646974  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.646985  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.647366  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"817","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1114 15:12:57.647715  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:57.647729  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.647739  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.647748  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.650074  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:57.650098  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.650108  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.650117  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.650124  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.650133  847956 round_trippers.go:580]     Audit-Id: 06f1c1f9-e6cb-4317-86c1-9693ef62bdcf
	I1114 15:12:57.650140  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.650151  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.650300  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:57.650554  847956 pod_ready.go:92] pod "etcd-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:57.650567  847956 pod_ready.go:81] duration metric: took 6.005259ms waiting for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.650581  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.650628  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-627820
	I1114 15:12:57.650635  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.650642  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.650648  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.652329  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:57.652352  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.652361  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.652369  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.652377  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.652384  847956 round_trippers.go:580]     Audit-Id: f717e8bf-5a36-404d-ba42-a2f5e176147d
	I1114 15:12:57.652392  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.652399  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.652573  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-627820","namespace":"kube-system","uid":"8a9b9224-3446-46f7-b525-e1f32bb9a33c","resourceVersion":"826","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.63:8443","kubernetes.io/config.hash":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.mirror":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.seen":"2023-11-14T15:02:19.515752674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1114 15:12:57.653041  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:57.653058  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.653070  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.653077  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.655064  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:57.655085  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.655094  847956 round_trippers.go:580]     Audit-Id: fd31b849-691d-4b09-936a-a82cc63ed0c8
	I1114 15:12:57.655102  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.655110  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.655121  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.655129  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.655140  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.655274  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:57.655557  847956 pod_ready.go:92] pod "kube-apiserver-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:57.655571  847956 pod_ready.go:81] duration metric: took 4.984292ms waiting for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.655579  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.655630  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-627820
	I1114 15:12:57.655637  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.655644  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.655650  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.657366  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:57.657382  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.657388  847956 round_trippers.go:580]     Audit-Id: 2317a3ed-3162-4c2e-966f-9cda4d29c71b
	I1114 15:12:57.657393  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.657398  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.657404  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.657412  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.657424  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.657606  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-627820","namespace":"kube-system","uid":"b4440d06-27f9-4455-ae59-2d8c744b99a2","resourceVersion":"816","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.mirror":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.seen":"2023-11-14T15:02:19.515747223Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1114 15:12:57.657949  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:57.657964  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.657970  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.657976  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.659693  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:57.659710  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.659720  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.659729  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.659735  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.659743  847956 round_trippers.go:580]     Audit-Id: d68b4484-cef5-40ec-894d-03d175b492e5
	I1114 15:12:57.659748  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.659756  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.660142  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:57.660400  847956 pod_ready.go:92] pod "kube-controller-manager-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:57.660414  847956 pod_ready.go:81] duration metric: took 4.829771ms waiting for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.660423  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.660461  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:12:57.660468  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.660475  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.660481  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.663238  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:57.663257  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.663272  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.663280  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.663286  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.663299  847956 round_trippers.go:580]     Audit-Id: ec864f1b-5584-490f-abff-1666ceaccd14
	I1114 15:12:57.663310  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.663315  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.663466  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4hf2k","generateName":"kube-proxy-","namespace":"kube-system","uid":"205bb9ac-4540-41d6-adb8-078c02d91b4e","resourceVersion":"672","creationTimestamp":"2023-11-14T15:04:00Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1114 15:12:57.663777  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:12:57.663787  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.663793  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.663799  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.665485  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:57.665506  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.665514  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.665519  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.665526  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.665531  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.665537  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.665542  847956 round_trippers.go:580]     Audit-Id: c2ba1df3-fabd-4dfb-80e6-e3ab6b9fe9eb
	I1114 15:12:57.665715  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m03","uid":"019405fb-baac-496b-96ae-131218281f18","resourceVersion":"830","creationTimestamp":"2023-11-14T15:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1114 15:12:57.665905  847956 pod_ready.go:92] pod "kube-proxy-4hf2k" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:57.665916  847956 pod_ready.go:81] duration metric: took 5.487026ms waiting for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.665924  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:57.837299  847956 request.go:629] Waited for 171.303276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:12:57.837384  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:12:57.837389  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:57.837397  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:57.837406  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:57.840450  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:57.840472  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:57.840480  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:57.840488  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:57.840496  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:57.840505  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:57 GMT
	I1114 15:12:57.840512  847956 round_trippers.go:580]     Audit-Id: 4a6f9174-f37c-4cbf-b87f-9d41f01d0317
	I1114 15:12:57.840519  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:57.840654  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6xg9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"2304a457-3a85-4791-8d18-4e1262db399f","resourceVersion":"467","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
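The "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's local rate limiter (default ~5 QPS with a burst of 10), which queues requests on the client before they ever reach server-side API Priority and Fairness. A minimal, hypothetical Go sketch of raising those limits when building a client; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not minikube's configuration:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig (path is a placeholder for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}

	// Raise the client-side rate limit so bursts of GETs are not delayed locally.
	// client-go defaults are QPS=5, Burst=10; the values below are illustrative only.
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}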
	I1114 15:12:58.037450  847956 request.go:629] Waited for 196.319892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:12:58.037534  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:12:58.037539  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:58.037548  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:58.037554  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:58.040496  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:58.040519  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:58.040528  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:58.040536  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:58.040544  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:58.040551  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:58 GMT
	I1114 15:12:58.040557  847956 round_trippers.go:580]     Audit-Id: 26b531f8-de9f-4cd6-ab90-eee24694dbea
	I1114 15:12:58.040565  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:58.041005  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"744755ad-0aac-4230-b688-92b3600f60d7","resourceVersion":"812","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1114 15:12:58.041315  847956 pod_ready.go:92] pod "kube-proxy-6xg9v" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:58.041334  847956 pod_ready.go:81] duration metric: took 375.404117ms waiting for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:58.041344  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:58.237377  847956 request.go:629] Waited for 195.954738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:12:58.237490  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:12:58.237498  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:58.237509  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:58.237524  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:58.240522  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:58.240550  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:58.240562  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:58.240571  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:58.240579  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:58.240587  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:58.240595  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:58 GMT
	I1114 15:12:58.240609  847956 round_trippers.go:580]     Audit-Id: de71231e-86f4-4996-a838-e36ff87baf1a
	I1114 15:12:58.240840  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m24mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"73a6d4c8-2f95-4818-bc62-566099466b42","resourceVersion":"799","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5513 chars]
	I1114 15:12:58.437750  847956 request.go:629] Waited for 196.425346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:58.437872  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:58.437883  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:58.437891  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:58.437897  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:58.441020  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:12:58.441043  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:58.441052  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:58.441060  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:58.441073  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:58.441080  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:58.441087  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:58 GMT
	I1114 15:12:58.441096  847956 round_trippers.go:580]     Audit-Id: 8f876b85-e48b-4e27-b75d-d559b7f287cd
	I1114 15:12:58.441643  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:58.442030  847956 pod_ready.go:92] pod "kube-proxy-m24mc" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:58.442051  847956 pod_ready.go:81] duration metric: took 400.700143ms waiting for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:58.442065  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:58.637463  847956 request.go:629] Waited for 195.304894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:12:58.637566  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:12:58.637574  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:58.637589  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:58.637606  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:58.640426  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:58.640468  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:58.640479  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:58.640488  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:58.640495  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:58.640504  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:58.640512  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:58 GMT
	I1114 15:12:58.640521  847956 round_trippers.go:580]     Audit-Id: 9a681462-0ca1-49c3-bc36-404613f69a03
	I1114 15:12:58.640715  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-627820","namespace":"kube-system","uid":"ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd","resourceVersion":"843","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.mirror":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.seen":"2023-11-14T15:02:19.515750784Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1114 15:12:58.837435  847956 request.go:629] Waited for 196.202869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:58.837511  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:12:58.837515  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:58.837523  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:58.837529  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:58.840109  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:58.840132  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:58.840142  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:58.840149  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:58.840157  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:58.840166  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:58 GMT
	I1114 15:12:58.840179  847956 round_trippers.go:580]     Audit-Id: ea6a823d-d356-4a65-aed2-6e04cca3ac2a
	I1114 15:12:58.840192  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:58.840349  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1114 15:12:58.840805  847956 pod_ready.go:92] pod "kube-scheduler-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:12:58.840828  847956 pod_ready.go:81] duration metric: took 398.747745ms waiting for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:12:58.840846  847956 pod_ready.go:38] duration metric: took 9.226944463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
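The pod_ready waits logged above repeatedly GET each control-plane pod (and its node) until the pod reports the Ready condition. A condensed sketch of the same idea with client-go; this is not minikube's actual pod_ready.go, and the kubeconfig path and pod name are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-minikube", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}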
	I1114 15:12:58.840867  847956 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:12:58.840942  847956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:12:58.855426  847956 command_runner.go:130] > 1083
	I1114 15:12:58.855476  847956 api_server.go:72] duration metric: took 14.620899701s to wait for apiserver process to appear ...
	I1114 15:12:58.855488  847956 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:12:58.855506  847956 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:12:58.861807  847956 api_server.go:279] https://192.168.39.63:8443/healthz returned 200:
	ok
	I1114 15:12:58.861892  847956 round_trippers.go:463] GET https://192.168.39.63:8443/version
	I1114 15:12:58.861904  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:58.861916  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:58.861927  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:58.863307  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:12:58.863339  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:58.863349  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:58.863358  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:58.863367  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:58.863374  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:58.863387  847956 round_trippers.go:580]     Content-Length: 264
	I1114 15:12:58.863398  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:58 GMT
	I1114 15:12:58.863409  847956 round_trippers.go:580]     Audit-Id: 742df6ab-19f0-4279-865d-83f107fe9cd8
	I1114 15:12:58.863476  847956 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1114 15:12:58.863547  847956 api_server.go:141] control plane version: v1.28.3
	I1114 15:12:58.863570  847956 api_server.go:131] duration metric: took 8.074572ms to wait for apiserver health ...
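Once the apiserver process is found, the log probes /healthz and then reads /version to record the control-plane version (v1.28.3 here). A short hedged sketch of equivalent calls through client-go's discovery/REST client; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz through the discovery REST client; a healthy apiserver answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version and print the gitVersion field seen in the response body above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}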
	I1114 15:12:58.863582  847956 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:12:59.038082  847956 request.go:629] Waited for 174.386142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:59.038167  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:59.038174  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:59.038183  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:59.038190  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:59.043020  847956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:12:59.043052  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:59.043061  847956 round_trippers.go:580]     Audit-Id: 7cfffdc8-88a8-49be-b64f-400980c6d638
	I1114 15:12:59.043067  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:59.043075  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:59.043084  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:59.043092  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:59.043105  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:59 GMT
	I1114 15:12:59.044652  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81835 chars]
	I1114 15:12:59.048271  847956 system_pods.go:59] 12 kube-system pods found
	I1114 15:12:59.048306  847956 system_pods.go:61] "coredns-5dd5756b68-vh8ng" [25afe3b4-014e-4180-9597-fb237d622c81] Running
	I1114 15:12:59.048314  847956 system_pods.go:61] "etcd-multinode-627820" [f7ab1cba-820a-4cad-8607-dcf55b587b77] Running
	I1114 15:12:59.048323  847956 system_pods.go:61] "kindnet-2d26z" [0ca83d6c-6208-49c7-b979-775971913b25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 15:12:59.048336  847956 system_pods.go:61] "kindnet-8wr7d" [d43cbd11-a37d-4e27-85b3-47ede6e9516b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 15:12:59.048351  847956 system_pods.go:61] "kindnet-f8xnr" [457f993f-4895-488a-8277-d5187afda5d3] Running
	I1114 15:12:59.048359  847956 system_pods.go:61] "kube-apiserver-multinode-627820" [8a9b9224-3446-46f7-b525-e1f32bb9a33c] Running
	I1114 15:12:59.048373  847956 system_pods.go:61] "kube-controller-manager-multinode-627820" [b4440d06-27f9-4455-ae59-2d8c744b99a2] Running
	I1114 15:12:59.048384  847956 system_pods.go:61] "kube-proxy-4hf2k" [205bb9ac-4540-41d6-adb8-078c02d91b4e] Running
	I1114 15:12:59.048395  847956 system_pods.go:61] "kube-proxy-6xg9v" [2304a457-3a85-4791-8d18-4e1262db399f] Running
	I1114 15:12:59.048406  847956 system_pods.go:61] "kube-proxy-m24mc" [73a6d4c8-2f95-4818-bc62-566099466b42] Running
	I1114 15:12:59.048422  847956 system_pods.go:61] "kube-scheduler-multinode-627820" [ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd] Running
	I1114 15:12:59.048434  847956 system_pods.go:61] "storage-provisioner" [f9cf343d-66fc-4de5-b0e0-df38ace21868] Running
	I1114 15:12:59.048446  847956 system_pods.go:74] duration metric: took 184.853959ms to wait for pod list to return data ...
	I1114 15:12:59.048461  847956 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:12:59.237917  847956 request.go:629] Waited for 189.367928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/default/serviceaccounts
	I1114 15:12:59.237997  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/default/serviceaccounts
	I1114 15:12:59.238003  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:59.238011  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:59.238019  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:59.240911  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:59.240936  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:59.240945  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:59 GMT
	I1114 15:12:59.240951  847956 round_trippers.go:580]     Audit-Id: 4a5726ab-c88b-47ef-91be-77a065dd263b
	I1114 15:12:59.240961  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:59.240970  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:59.240977  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:59.240984  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:59.240991  847956 round_trippers.go:580]     Content-Length: 261
	I1114 15:12:59.241046  847956 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"43d16d63-7ee4-4137-b6e0-aa3fd01e445d","resourceVersion":"329","creationTimestamp":"2023-11-14T15:02:31Z"}}]}
	I1114 15:12:59.241312  847956 default_sa.go:45] found service account: "default"
	I1114 15:12:59.241337  847956 default_sa.go:55] duration metric: took 192.866617ms for default service account to be created ...
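The default_sa step above lists ServiceAccounts in the default namespace until one named "default" appears. A minimal hedged equivalent that fetches it directly; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the "default" ServiceAccount; a NotFound error means the controller has not created it yet.
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found service account:", sa.Name)
}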
	I1114 15:12:59.241347  847956 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:12:59.437799  847956 request.go:629] Waited for 196.36836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:59.437895  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:12:59.437905  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:59.437921  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:59.437933  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:59.443050  847956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:12:59.443085  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:59.443096  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:59.443105  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:59.443113  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:59 GMT
	I1114 15:12:59.443122  847956 round_trippers.go:580]     Audit-Id: f861ddfa-075b-4270-bb62-e7dd18bbb136
	I1114 15:12:59.443145  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:59.443154  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:59.444584  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81835 chars]
	I1114 15:12:59.447081  847956 system_pods.go:86] 12 kube-system pods found
	I1114 15:12:59.447111  847956 system_pods.go:89] "coredns-5dd5756b68-vh8ng" [25afe3b4-014e-4180-9597-fb237d622c81] Running
	I1114 15:12:59.447120  847956 system_pods.go:89] "etcd-multinode-627820" [f7ab1cba-820a-4cad-8607-dcf55b587b77] Running
	I1114 15:12:59.447132  847956 system_pods.go:89] "kindnet-2d26z" [0ca83d6c-6208-49c7-b979-775971913b25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 15:12:59.447143  847956 system_pods.go:89] "kindnet-8wr7d" [d43cbd11-a37d-4e27-85b3-47ede6e9516b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 15:12:59.447153  847956 system_pods.go:89] "kindnet-f8xnr" [457f993f-4895-488a-8277-d5187afda5d3] Running
	I1114 15:12:59.447163  847956 system_pods.go:89] "kube-apiserver-multinode-627820" [8a9b9224-3446-46f7-b525-e1f32bb9a33c] Running
	I1114 15:12:59.447174  847956 system_pods.go:89] "kube-controller-manager-multinode-627820" [b4440d06-27f9-4455-ae59-2d8c744b99a2] Running
	I1114 15:12:59.447183  847956 system_pods.go:89] "kube-proxy-4hf2k" [205bb9ac-4540-41d6-adb8-078c02d91b4e] Running
	I1114 15:12:59.447191  847956 system_pods.go:89] "kube-proxy-6xg9v" [2304a457-3a85-4791-8d18-4e1262db399f] Running
	I1114 15:12:59.447199  847956 system_pods.go:89] "kube-proxy-m24mc" [73a6d4c8-2f95-4818-bc62-566099466b42] Running
	I1114 15:12:59.447208  847956 system_pods.go:89] "kube-scheduler-multinode-627820" [ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd] Running
	I1114 15:12:59.447216  847956 system_pods.go:89] "storage-provisioner" [f9cf343d-66fc-4de5-b0e0-df38ace21868] Running
	I1114 15:12:59.447227  847956 system_pods.go:126] duration metric: took 205.873781ms to wait for k8s-apps to be running ...
	I1114 15:12:59.447245  847956 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:12:59.447307  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:12:59.463317  847956 system_svc.go:56] duration metric: took 16.063352ms WaitForService to wait for kubelet.
	I1114 15:12:59.463351  847956 kubeadm.go:581] duration metric: took 15.228776201s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
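The WaitForService step above shells into the VM and runs "sudo systemctl is-active --quiet service kubelet", relying on the exit code rather than any output. A hedged local sketch of the same check with os/exec (assumes a systemd host; the unit name comes from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" prints nothing and reports state via exit code:
	// 0 means active, non-zero means inactive, failed, or unknown.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}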
	I1114 15:12:59.463380  847956 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:12:59.637864  847956 request.go:629] Waited for 174.371368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes
	I1114 15:12:59.637938  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes
	I1114 15:12:59.637944  847956 round_trippers.go:469] Request Headers:
	I1114 15:12:59.637952  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:12:59.637960  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:12:59.640932  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:12:59.640957  847956 round_trippers.go:577] Response Headers:
	I1114 15:12:59.640967  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:12:59.640976  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:12:59 GMT
	I1114 15:12:59.640984  847956 round_trippers.go:580]     Audit-Id: 023eebf7-12ee-42c4-810e-b7ae9a2cc11c
	I1114 15:12:59.640999  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:12:59.641009  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:12:59.641021  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:12:59.641268  847956 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"823","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15074 chars]
	I1114 15:12:59.641900  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:12:59.641927  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:12:59.641941  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:12:59.641949  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:12:59.641956  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:12:59.641963  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:12:59.641974  847956 node_conditions.go:105] duration metric: took 178.587286ms to run NodePressure ...
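The NodePressure check above lists every node and reads its ephemeral-storage and CPU capacity from status. A compact sketch of reading the same fields with client-go; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// The String() form matches the "17784752Ki" / "2" values logged above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}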
	I1114 15:12:59.641994  847956 start.go:228] waiting for startup goroutines ...
	I1114 15:12:59.642016  847956 start.go:233] waiting for cluster config update ...
	I1114 15:12:59.642027  847956 start.go:242] writing updated cluster config ...
	I1114 15:12:59.642594  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:12:59.642708  847956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:12:59.645520  847956 out.go:177] * Starting worker node multinode-627820-m02 in cluster multinode-627820
	I1114 15:12:59.647040  847956 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:12:59.647074  847956 cache.go:56] Caching tarball of preloaded images
	I1114 15:12:59.647202  847956 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:12:59.647219  847956 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:12:59.647332  847956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:12:59.647526  847956 start.go:365] acquiring machines lock for multinode-627820-m02: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:12:59.647574  847956 start.go:369] acquired machines lock for "multinode-627820-m02" in 27.511µs
	I1114 15:12:59.647586  847956 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:12:59.647598  847956 fix.go:54] fixHost starting: m02
	I1114 15:12:59.647849  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:12:59.647879  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:12:59.663001  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I1114 15:12:59.663517  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:12:59.664053  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:12:59.664083  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:12:59.664517  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:12:59.664840  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:12:59.665025  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetState
	I1114 15:12:59.666753  847956 fix.go:102] recreateIfNeeded on multinode-627820-m02: state=Running err=<nil>
	W1114 15:12:59.666775  847956 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:12:59.668798  847956 out.go:177] * Updating the running kvm2 "multinode-627820-m02" VM ...
	I1114 15:12:59.670274  847956 machine.go:88] provisioning docker machine ...
	I1114 15:12:59.670307  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:12:59.670518  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:12:59.670686  847956 buildroot.go:166] provisioning hostname "multinode-627820-m02"
	I1114 15:12:59.670708  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:12:59.670879  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:12:59.673395  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.673815  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:12:59.673844  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.674007  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:12:59.674183  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:12:59.674346  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:12:59.674557  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:12:59.674753  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:12:59.675243  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:12:59.675269  847956 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-627820-m02 && echo "multinode-627820-m02" | sudo tee /etc/hostname
	I1114 15:12:59.831745  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-627820-m02
	
	I1114 15:12:59.831786  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:12:59.835155  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.835577  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:12:59.835616  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.835787  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:12:59.836025  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:12:59.836192  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:12:59.836343  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:12:59.836549  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:12:59.836971  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:12:59.836992  847956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-627820-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-627820-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-627820-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:12:59.973967  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
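Provisioning above is performed by running shell snippets over SSH against the VM (set the hostname, then patch /etc/hosts). A hedged sketch of running one such remote command with golang.org/x/crypto/ssh, roughly what libmachine's SSH client does; the user, key path, and address are placeholders taken from the log's context:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the log above targets 192.168.39.38:22 with the machine's own key.
	key, err := os.ReadFile("/home/user/.minikube/machines/multinode-627820-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed VM user for this sketch
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.38:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same idea as the provisioning step above: set the hostname and echo it back.
	out, err := session.CombinedOutput(`sudo hostname multinode-627820-m02 && hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote output: %s", out)
}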
	I1114 15:12:59.974013  847956 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:12:59.974035  847956 buildroot.go:174] setting up certificates
	I1114 15:12:59.974055  847956 provision.go:83] configureAuth start
	I1114 15:12:59.974074  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetMachineName
	I1114 15:12:59.974394  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:12:59.977416  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.977835  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:12:59.977872  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.978039  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:12:59.980587  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.981002  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:12:59.981027  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:12:59.981253  847956 provision.go:138] copyHostCerts
	I1114 15:12:59.981289  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:12:59.981330  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:12:59.981343  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:12:59.981435  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:12:59.981557  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:12:59.981586  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:12:59.981594  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:12:59.981643  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:12:59.981714  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:12:59.981745  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:12:59.981753  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:12:59.981791  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:12:59.981865  847956 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.multinode-627820-m02 san=[192.168.39.38 192.168.39.38 localhost 127.0.0.1 minikube multinode-627820-m02]
	I1114 15:13:00.132322  847956 provision.go:172] copyRemoteCerts
	I1114 15:13:00.132389  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:13:00.132415  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:13:00.135343  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:13:00.135780  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:13:00.135812  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:13:00.135977  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:13:00.136207  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:13:00.136381  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:13:00.136513  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:13:00.229925  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 15:13:00.229998  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:13:00.254937  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 15:13:00.254997  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1114 15:13:00.278830  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 15:13:00.278910  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:13:00.301453  847956 provision.go:86] duration metric: configureAuth took 327.378017ms
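configureAuth above regenerates a machine-specific server certificate signed by the shared minikube CA, with the SANs listed in the provision line (node IP 192.168.39.38, localhost, 127.0.0.1 and the node hostnames), and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hedged way to inspect the result with the OpenSSL 1.1.1 shipped in the guest image (illustrative only, using the remote paths shown above):

	openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem
	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem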
	I1114 15:13:00.301490  847956 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:13:00.301816  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:13:00.301986  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:13:00.304923  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:13:00.305324  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:13:00.305452  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:13:00.305473  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:13:00.305700  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:13:00.305871  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:13:00.306021  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:13:00.306200  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:13:00.306525  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:13:00.306541  847956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:14:31.008778  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:14:31.008852  847956 machine.go:91] provisioned docker machine in 1m31.338533861s
	I1114 15:14:31.008870  847956 start.go:300] post-start starting for "multinode-627820-m02" (driver="kvm2")
	I1114 15:14:31.008891  847956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:14:31.008938  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:14:31.009477  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:14:31.009521  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:14:31.012543  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.013059  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:14:31.013090  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.013267  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:14:31.013513  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:14:31.013692  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:14:31.013882  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:14:31.112630  847956 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:14:31.117071  847956 command_runner.go:130] > NAME=Buildroot
	I1114 15:14:31.117107  847956 command_runner.go:130] > VERSION=2021.02.12-1-g9cb9327-dirty
	I1114 15:14:31.117115  847956 command_runner.go:130] > ID=buildroot
	I1114 15:14:31.117134  847956 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 15:14:31.117142  847956 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 15:14:31.117327  847956 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:14:31.117381  847956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:14:31.117460  847956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:14:31.117549  847956 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:14:31.117563  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /etc/ssl/certs/8322112.pem
	I1114 15:14:31.117664  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:14:31.127177  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:14:31.150330  847956 start.go:303] post-start completed in 141.438041ms
	I1114 15:14:31.150358  847956 fix.go:56] fixHost completed within 1m31.502758529s
	I1114 15:14:31.150389  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:14:31.153271  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.153731  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:14:31.153780  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.153931  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:14:31.154191  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:14:31.154387  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:14:31.154568  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:14:31.154721  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:14:31.155157  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1114 15:14:31.155175  847956 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:14:31.285797  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699974871.273966630
	
	I1114 15:14:31.285830  847956 fix.go:206] guest clock: 1699974871.273966630
	I1114 15:14:31.285845  847956 fix.go:219] Guest: 2023-11-14 15:14:31.27396663 +0000 UTC Remote: 2023-11-14 15:14:31.150363018 +0000 UTC m=+455.664223568 (delta=123.603612ms)
	I1114 15:14:31.285873  847956 fix.go:190] guest clock delta is within tolerance: 123.603612ms
	I1114 15:14:31.285885  847956 start.go:83] releasing machines lock for "multinode-627820-m02", held for 1m31.638302149s
	I1114 15:14:31.285920  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:14:31.286266  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:14:31.289203  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.289598  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:14:31.289635  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.291654  847956 out.go:177] * Found network options:
	I1114 15:14:31.293138  847956 out.go:177]   - NO_PROXY=192.168.39.63
	W1114 15:14:31.294591  847956 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 15:14:31.294654  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:14:31.295321  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:14:31.295576  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:14:31.295710  847956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:14:31.295755  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	W1114 15:14:31.296103  847956 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 15:14:31.296199  847956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:14:31.296225  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:14:31.298885  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.299245  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:14:31.299276  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.299508  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:14:31.299508  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.299712  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:14:31.299906  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:14:31.299956  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:14:31.299982  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:31.300153  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:14:31.300257  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:14:31.300421  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:14:31.300593  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:14:31.300707  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:14:31.536336  847956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 15:14:31.536338  847956 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 15:14:31.542869  847956 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 15:14:31.542912  847956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:14:31.542970  847956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:14:31.552978  847956 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
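The find invocation two lines up is mangled by the log's %!p(MISSING) formatting; with quoting spelled out it is roughly equivalent to the sketch below, which moves any bridge or podman CNI configs aside (suffix .mk_disabled) so they cannot conflict with the CNI minikube deploys. This is an approximation, not the exact command line minikube builds:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;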
	I1114 15:14:31.553006  847956 start.go:472] detecting cgroup driver to use...
	I1114 15:14:31.553072  847956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:14:31.568249  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:14:31.581796  847956 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:14:31.581871  847956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:14:31.595722  847956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:14:31.610496  847956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:14:31.759107  847956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:14:31.914035  847956 docker.go:219] disabling docker service ...
	I1114 15:14:31.914117  847956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:14:31.931811  847956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:14:31.944591  847956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:14:32.069332  847956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:14:32.195691  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:14:32.207737  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:14:32.225480  847956 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 15:14:32.225822  847956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:14:32.225885  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:14:32.235211  847956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:14:32.235270  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:14:32.244264  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:14:32.253361  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:14:32.263470  847956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:14:32.272868  847956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:14:32.282647  847956 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 15:14:32.282809  847956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:14:32.293992  847956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:14:32.422200  847956 ssh_runner.go:195] Run: sudo systemctl restart crio
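The sed edits above point the CRI-O drop-in at the registry.k8s.io/pause:3.9 pause image and switch the cgroup manager to cgroupfs with conmon placed in the "pod" cgroup, after which crio is restarted. One way to confirm the resulting drop-in on the guest (illustrative; the expected values follow from the sed commands above and the crio config dump further below):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"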
	I1114 15:14:32.653503  847956 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:14:32.653588  847956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:14:32.658757  847956 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 15:14:32.658788  847956 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 15:14:32.658796  847956 command_runner.go:130] > Device: 16h/22d	Inode: 1216        Links: 1
	I1114 15:14:32.658803  847956 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:14:32.658808  847956 command_runner.go:130] > Access: 2023-11-14 15:14:32.563072486 +0000
	I1114 15:14:32.658818  847956 command_runner.go:130] > Modify: 2023-11-14 15:14:32.563072486 +0000
	I1114 15:14:32.658824  847956 command_runner.go:130] > Change: 2023-11-14 15:14:32.563072486 +0000
	I1114 15:14:32.658830  847956 command_runner.go:130] >  Birth: -
	I1114 15:14:32.658921  847956 start.go:540] Will wait 60s for crictl version
	I1114 15:14:32.658983  847956 ssh_runner.go:195] Run: which crictl
	I1114 15:14:32.662951  847956 command_runner.go:130] > /usr/bin/crictl
	I1114 15:14:32.663151  847956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:14:32.706382  847956 command_runner.go:130] > Version:  0.1.0
	I1114 15:14:32.706620  847956 command_runner.go:130] > RuntimeName:  cri-o
	I1114 15:14:32.706635  847956 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1114 15:14:32.706640  847956 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 15:14:32.708429  847956 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:14:32.708511  847956 ssh_runner.go:195] Run: crio --version
	I1114 15:14:32.764548  847956 command_runner.go:130] > crio version 1.24.1
	I1114 15:14:32.764582  847956 command_runner.go:130] > Version:          1.24.1
	I1114 15:14:32.764592  847956 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:14:32.764598  847956 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:14:32.764607  847956 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:14:32.764614  847956 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:14:32.764619  847956 command_runner.go:130] > Compiler:         gc
	I1114 15:14:32.764625  847956 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:14:32.764633  847956 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:14:32.764643  847956 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:14:32.764650  847956 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:14:32.764658  847956 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:14:32.766294  847956 ssh_runner.go:195] Run: crio --version
	I1114 15:14:32.818291  847956 command_runner.go:130] > crio version 1.24.1
	I1114 15:14:32.818319  847956 command_runner.go:130] > Version:          1.24.1
	I1114 15:14:32.818328  847956 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:14:32.818336  847956 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:14:32.818358  847956 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:14:32.818366  847956 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:14:32.818373  847956 command_runner.go:130] > Compiler:         gc
	I1114 15:14:32.818380  847956 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:14:32.818388  847956 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:14:32.818401  847956 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:14:32.818412  847956 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:14:32.818420  847956 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:14:32.820683  847956 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:14:32.822493  847956 out.go:177]   - env NO_PROXY=192.168.39.63
	I1114 15:14:32.824033  847956 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:14:32.827317  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:32.827819  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:14:32.827850  847956 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:14:32.828098  847956 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:14:32.832704  847956 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1114 15:14:32.833054  847956 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820 for IP: 192.168.39.38
	I1114 15:14:32.833084  847956 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:14:32.833227  847956 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:14:32.833277  847956 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:14:32.833298  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 15:14:32.833317  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 15:14:32.833337  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 15:14:32.833355  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 15:14:32.833417  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:14:32.833463  847956 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:14:32.833480  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:14:32.833530  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:14:32.833572  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:14:32.833607  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:14:32.833665  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:14:32.833709  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:14:32.833730  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem -> /usr/share/ca-certificates/832211.pem
	I1114 15:14:32.833751  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /usr/share/ca-certificates/8322112.pem
	I1114 15:14:32.834216  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:14:32.858788  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:14:32.882595  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:14:32.904927  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:14:32.927790  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:14:32.950527  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:14:32.976899  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:14:33.003026  847956 ssh_runner.go:195] Run: openssl version
	I1114 15:14:33.009186  847956 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 15:14:33.009286  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:14:33.020550  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:14:33.025878  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:14:33.026105  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:14:33.026171  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:14:33.032132  847956 command_runner.go:130] > 51391683
	I1114 15:14:33.032221  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:14:33.041693  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:14:33.052131  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:14:33.056844  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:14:33.056964  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:14:33.057032  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:14:33.062447  847956 command_runner.go:130] > 3ec20f2e
	I1114 15:14:33.062614  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:14:33.072856  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:14:33.085598  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:14:33.090732  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:14:33.090757  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:14:33.090809  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:14:33.096544  847956 command_runner.go:130] > b5213941
	I1114 15:14:33.096809  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
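Each certificate installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683, 3ec20f2e and b5213941 values printed above); OpenSSL resolves trusted CAs by that <hash>.0 name. Reduced to a sketch using the same paths as the commands above (illustrative only):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"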
	I1114 15:14:33.106893  847956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:14:33.111009  847956 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:14:33.111167  847956 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:14:33.111275  847956 ssh_runner.go:195] Run: crio config
	I1114 15:14:33.163793  847956 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 15:14:33.163836  847956 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 15:14:33.163845  847956 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 15:14:33.163850  847956 command_runner.go:130] > #
	I1114 15:14:33.163860  847956 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 15:14:33.163870  847956 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 15:14:33.163887  847956 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 15:14:33.163899  847956 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 15:14:33.163910  847956 command_runner.go:130] > # reload'.
	I1114 15:14:33.163919  847956 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 15:14:33.163925  847956 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 15:14:33.163932  847956 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 15:14:33.163942  847956 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 15:14:33.163950  847956 command_runner.go:130] > [crio]
	I1114 15:14:33.163962  847956 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 15:14:33.163973  847956 command_runner.go:130] > # containers images, in this directory.
	I1114 15:14:33.163984  847956 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1114 15:14:33.164001  847956 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 15:14:33.164012  847956 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1114 15:14:33.164019  847956 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 15:14:33.164026  847956 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 15:14:33.164033  847956 command_runner.go:130] > storage_driver = "overlay"
	I1114 15:14:33.164046  847956 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 15:14:33.164059  847956 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 15:14:33.164080  847956 command_runner.go:130] > storage_option = [
	I1114 15:14:33.164092  847956 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1114 15:14:33.164099  847956 command_runner.go:130] > ]
	I1114 15:14:33.164111  847956 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 15:14:33.164125  847956 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 15:14:33.164136  847956 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 15:14:33.164149  847956 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 15:14:33.164162  847956 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 15:14:33.164174  847956 command_runner.go:130] > # always happen on a node reboot
	I1114 15:14:33.164186  847956 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 15:14:33.164197  847956 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 15:14:33.164209  847956 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 15:14:33.164227  847956 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 15:14:33.164239  847956 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 15:14:33.164255  847956 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 15:14:33.164271  847956 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 15:14:33.164281  847956 command_runner.go:130] > # internal_wipe = true
	I1114 15:14:33.164290  847956 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 15:14:33.164305  847956 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 15:14:33.164318  847956 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 15:14:33.164331  847956 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 15:14:33.164345  847956 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 15:14:33.164355  847956 command_runner.go:130] > [crio.api]
	I1114 15:14:33.164364  847956 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 15:14:33.164377  847956 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 15:14:33.164386  847956 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 15:14:33.164423  847956 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 15:14:33.164442  847956 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 15:14:33.164451  847956 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 15:14:33.164474  847956 command_runner.go:130] > # stream_port = "0"
	I1114 15:14:33.164489  847956 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 15:14:33.164497  847956 command_runner.go:130] > # stream_enable_tls = false
	I1114 15:14:33.164511  847956 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 15:14:33.164521  847956 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 15:14:33.164531  847956 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 15:14:33.164544  847956 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 15:14:33.164554  847956 command_runner.go:130] > # minutes.
	I1114 15:14:33.164562  847956 command_runner.go:130] > # stream_tls_cert = ""
	I1114 15:14:33.164572  847956 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 15:14:33.164586  847956 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 15:14:33.164593  847956 command_runner.go:130] > # stream_tls_key = ""
	I1114 15:14:33.164607  847956 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 15:14:33.164621  847956 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 15:14:33.164633  847956 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 15:14:33.164643  847956 command_runner.go:130] > # stream_tls_ca = ""
	I1114 15:14:33.164652  847956 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:14:33.164659  847956 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1114 15:14:33.164670  847956 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:14:33.164681  847956 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1114 15:14:33.164705  847956 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 15:14:33.164717  847956 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 15:14:33.164727  847956 command_runner.go:130] > [crio.runtime]
	I1114 15:14:33.164749  847956 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 15:14:33.164761  847956 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 15:14:33.164781  847956 command_runner.go:130] > # "nofile=1024:2048"
	I1114 15:14:33.164795  847956 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 15:14:33.164806  847956 command_runner.go:130] > # default_ulimits = [
	I1114 15:14:33.164812  847956 command_runner.go:130] > # ]
	I1114 15:14:33.164826  847956 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 15:14:33.164833  847956 command_runner.go:130] > # no_pivot = false
	I1114 15:14:33.164846  847956 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 15:14:33.164856  847956 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 15:14:33.164867  847956 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 15:14:33.164877  847956 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 15:14:33.164889  847956 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 15:14:33.164903  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:14:33.164912  847956 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1114 15:14:33.164919  847956 command_runner.go:130] > # Cgroup setting for conmon
	I1114 15:14:33.164930  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 15:14:33.164938  847956 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 15:14:33.164953  847956 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 15:14:33.164965  847956 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 15:14:33.164979  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:14:33.164989  847956 command_runner.go:130] > conmon_env = [
	I1114 15:14:33.164999  847956 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1114 15:14:33.165008  847956 command_runner.go:130] > ]
	I1114 15:14:33.165017  847956 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 15:14:33.165026  847956 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 15:14:33.165040  847956 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 15:14:33.165050  847956 command_runner.go:130] > # default_env = [
	I1114 15:14:33.165057  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165089  847956 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 15:14:33.165102  847956 command_runner.go:130] > # selinux = false
	I1114 15:14:33.165113  847956 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 15:14:33.165127  847956 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 15:14:33.165139  847956 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 15:14:33.165150  847956 command_runner.go:130] > # seccomp_profile = ""
	I1114 15:14:33.165162  847956 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 15:14:33.165175  847956 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 15:14:33.165190  847956 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 15:14:33.165204  847956 command_runner.go:130] > # which might increase security.
	I1114 15:14:33.165212  847956 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1114 15:14:33.165227  847956 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 15:14:33.165240  847956 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 15:14:33.165254  847956 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 15:14:33.165266  847956 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 15:14:33.165275  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:14:33.165287  847956 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 15:14:33.165300  847956 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 15:14:33.165308  847956 command_runner.go:130] > # the cgroup blockio controller.
	I1114 15:14:33.165319  847956 command_runner.go:130] > # blockio_config_file = ""
	I1114 15:14:33.165334  847956 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 15:14:33.165345  847956 command_runner.go:130] > # irqbalance daemon.
	I1114 15:14:33.165353  847956 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 15:14:33.165396  847956 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 15:14:33.165409  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:14:33.165417  847956 command_runner.go:130] > # rdt_config_file = ""
	I1114 15:14:33.165435  847956 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 15:14:33.165446  847956 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 15:14:33.165463  847956 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 15:14:33.165473  847956 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 15:14:33.165487  847956 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 15:14:33.165501  847956 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 15:14:33.165508  847956 command_runner.go:130] > # will be added.
	I1114 15:14:33.165519  847956 command_runner.go:130] > # default_capabilities = [
	I1114 15:14:33.165530  847956 command_runner.go:130] > # 	"CHOWN",
	I1114 15:14:33.165537  847956 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 15:14:33.165544  847956 command_runner.go:130] > # 	"FSETID",
	I1114 15:14:33.165554  847956 command_runner.go:130] > # 	"FOWNER",
	I1114 15:14:33.165561  847956 command_runner.go:130] > # 	"SETGID",
	I1114 15:14:33.165568  847956 command_runner.go:130] > # 	"SETUID",
	I1114 15:14:33.165577  847956 command_runner.go:130] > # 	"SETPCAP",
	I1114 15:14:33.165585  847956 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 15:14:33.165596  847956 command_runner.go:130] > # 	"KILL",
	I1114 15:14:33.165602  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165614  847956 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 15:14:33.165629  847956 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:14:33.165638  847956 command_runner.go:130] > # default_sysctls = [
	I1114 15:14:33.165648  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165657  847956 command_runner.go:130] > # List of devices on the host that a
	I1114 15:14:33.165671  847956 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 15:14:33.165681  847956 command_runner.go:130] > # allowed_devices = [
	I1114 15:14:33.165688  847956 command_runner.go:130] > # 	"/dev/fuse",
	I1114 15:14:33.165698  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165707  847956 command_runner.go:130] > # List of additional devices. specified as
	I1114 15:14:33.165721  847956 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 15:14:33.165734  847956 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 15:14:33.165788  847956 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:14:33.165800  847956 command_runner.go:130] > # additional_devices = [
	I1114 15:14:33.165806  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165819  847956 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 15:14:33.165827  847956 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 15:14:33.165837  847956 command_runner.go:130] > # 	"/etc/cdi",
	I1114 15:14:33.165843  847956 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 15:14:33.165853  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165863  847956 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 15:14:33.165877  847956 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 15:14:33.165886  847956 command_runner.go:130] > # Defaults to false.
	I1114 15:14:33.165896  847956 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 15:14:33.165910  847956 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 15:14:33.165925  847956 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 15:14:33.165935  847956 command_runner.go:130] > # hooks_dir = [
	I1114 15:14:33.165943  847956 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 15:14:33.165951  847956 command_runner.go:130] > # ]
	I1114 15:14:33.165961  847956 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 15:14:33.165975  847956 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 15:14:33.165987  847956 command_runner.go:130] > # its default mounts from the following two files:
	I1114 15:14:33.165996  847956 command_runner.go:130] > #
	I1114 15:14:33.166008  847956 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 15:14:33.166023  847956 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 15:14:33.166037  847956 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 15:14:33.166043  847956 command_runner.go:130] > #
	I1114 15:14:33.166058  847956 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 15:14:33.166069  847956 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 15:14:33.166079  847956 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 15:14:33.166085  847956 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 15:14:33.166091  847956 command_runner.go:130] > #
	I1114 15:14:33.166096  847956 command_runner.go:130] > # default_mounts_file = ""
	I1114 15:14:33.166102  847956 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 15:14:33.166110  847956 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 15:14:33.166119  847956 command_runner.go:130] > pids_limit = 1024
	I1114 15:14:33.166129  847956 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1114 15:14:33.166142  847956 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 15:14:33.166154  847956 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 15:14:33.166171  847956 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 15:14:33.166181  847956 command_runner.go:130] > # log_size_max = -1
	I1114 15:14:33.166195  847956 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1114 15:14:33.166205  847956 command_runner.go:130] > # log_to_journald = false
	I1114 15:14:33.166212  847956 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 15:14:33.166221  847956 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 15:14:33.166230  847956 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 15:14:33.166243  847956 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 15:14:33.166254  847956 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 15:14:33.166264  847956 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 15:14:33.166277  847956 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 15:14:33.166287  847956 command_runner.go:130] > # read_only = false
	I1114 15:14:33.166298  847956 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 15:14:33.166309  847956 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 15:14:33.166314  847956 command_runner.go:130] > # live configuration reload.
	I1114 15:14:33.166320  847956 command_runner.go:130] > # log_level = "info"
	I1114 15:14:33.166332  847956 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 15:14:33.166345  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:14:33.166352  847956 command_runner.go:130] > # log_filter = ""
	I1114 15:14:33.166365  847956 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 15:14:33.166379  847956 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 15:14:33.166412  847956 command_runner.go:130] > # separated by comma.
	I1114 15:14:33.166423  847956 command_runner.go:130] > # uid_mappings = ""
	I1114 15:14:33.166437  847956 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 15:14:33.166460  847956 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 15:14:33.166470  847956 command_runner.go:130] > # separated by comma.
	I1114 15:14:33.166478  847956 command_runner.go:130] > # gid_mappings = ""
	I1114 15:14:33.166491  847956 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 15:14:33.166504  847956 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:14:33.166513  847956 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:14:33.166523  847956 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 15:14:33.166536  847956 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 15:14:33.166551  847956 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:14:33.166561  847956 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:14:33.166572  847956 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 15:14:33.166584  847956 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 15:14:33.166597  847956 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 15:14:33.166607  847956 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1114 15:14:33.166613  847956 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 15:14:33.166627  847956 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 15:14:33.166639  847956 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 15:14:33.166647  847956 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 15:14:33.166662  847956 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 15:14:33.166673  847956 command_runner.go:130] > drop_infra_ctr = false
	I1114 15:14:33.166685  847956 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 15:14:33.166698  847956 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1114 15:14:33.166710  847956 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 15:14:33.166717  847956 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 15:14:33.166728  847956 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 15:14:33.166740  847956 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 15:14:33.166749  847956 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 15:14:33.166763  847956 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 15:14:33.166774  847956 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1114 15:14:33.166788  847956 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 15:14:33.166804  847956 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 15:14:33.166813  847956 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 15:14:33.166819  847956 command_runner.go:130] > # default_runtime = "runc"
	I1114 15:14:33.166832  847956 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 15:14:33.166847  847956 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1114 15:14:33.166865  847956 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 15:14:33.166880  847956 command_runner.go:130] > # creation as a file is not desired either.
	I1114 15:14:33.166895  847956 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 15:14:33.166904  847956 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 15:14:33.166912  847956 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 15:14:33.166921  847956 command_runner.go:130] > # ]
	I1114 15:14:33.166933  847956 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 15:14:33.166947  847956 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 15:14:33.166960  847956 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 15:14:33.166974  847956 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 15:14:33.166983  847956 command_runner.go:130] > #
	I1114 15:14:33.166990  847956 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 15:14:33.166997  847956 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 15:14:33.167004  847956 command_runner.go:130] > #  runtime_type = "oci"
	I1114 15:14:33.167015  847956 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 15:14:33.167027  847956 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 15:14:33.167037  847956 command_runner.go:130] > #  allowed_annotations = []
	I1114 15:14:33.167047  847956 command_runner.go:130] > # Where:
	I1114 15:14:33.167056  847956 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 15:14:33.167073  847956 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 15:14:33.167083  847956 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 15:14:33.167094  847956 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 15:14:33.167105  847956 command_runner.go:130] > #   in $PATH.
	I1114 15:14:33.167116  847956 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 15:14:33.167128  847956 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 15:14:33.167142  847956 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 15:14:33.167151  847956 command_runner.go:130] > #   state.
	I1114 15:14:33.167164  847956 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 15:14:33.167175  847956 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1114 15:14:33.167181  847956 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 15:14:33.167194  847956 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 15:14:33.167208  847956 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 15:14:33.167243  847956 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 15:14:33.167255  847956 command_runner.go:130] > #   The currently recognized values are:
	I1114 15:14:33.167269  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 15:14:33.167284  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 15:14:33.167298  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 15:14:33.167314  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 15:14:33.167330  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 15:14:33.167344  847956 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 15:14:33.167358  847956 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 15:14:33.167369  847956 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 15:14:33.167377  847956 command_runner.go:130] > #   should be moved to the container's cgroup
	I1114 15:14:33.167388  847956 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 15:14:33.167399  847956 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1114 15:14:33.167407  847956 command_runner.go:130] > runtime_type = "oci"
	I1114 15:14:33.167418  847956 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 15:14:33.167428  847956 command_runner.go:130] > runtime_config_path = ""
	I1114 15:14:33.167435  847956 command_runner.go:130] > monitor_path = ""
	I1114 15:14:33.167448  847956 command_runner.go:130] > monitor_cgroup = ""
	I1114 15:14:33.167462  847956 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 15:14:33.167472  847956 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 15:14:33.167482  847956 command_runner.go:130] > # running containers
	I1114 15:14:33.167490  847956 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1114 15:14:33.167504  847956 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 15:14:33.167569  847956 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 15:14:33.167585  847956 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 15:14:33.167594  847956 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 15:14:33.167602  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 15:14:33.167610  847956 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 15:14:33.167617  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 15:14:33.167626  847956 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 15:14:33.167633  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1114 15:14:33.167645  847956 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 15:14:33.167655  847956 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 15:14:33.167665  847956 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 15:14:33.167680  847956 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1114 15:14:33.167704  847956 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1114 15:14:33.167718  847956 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 15:14:33.167735  847956 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 15:14:33.167751  847956 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 15:14:33.167764  847956 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1114 15:14:33.167779  847956 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 15:14:33.167792  847956 command_runner.go:130] > # Example:
	I1114 15:14:33.167802  847956 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 15:14:33.167811  847956 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 15:14:33.167822  847956 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 15:14:33.167831  847956 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 15:14:33.167841  847956 command_runner.go:130] > # cpuset = 0
	I1114 15:14:33.167848  847956 command_runner.go:130] > # cpushares = "0-1"
	I1114 15:14:33.167857  847956 command_runner.go:130] > # Where:
	I1114 15:14:33.167866  847956 command_runner.go:130] > # The workload name is workload-type.
	I1114 15:14:33.167881  847956 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 15:14:33.167893  847956 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 15:14:33.167902  847956 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 15:14:33.167918  847956 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 15:14:33.167931  847956 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 15:14:33.167938  847956 command_runner.go:130] > # 
	I1114 15:14:33.167948  847956 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 15:14:33.167956  847956 command_runner.go:130] > #
	I1114 15:14:33.167966  847956 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 15:14:33.167982  847956 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 15:14:33.167992  847956 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 15:14:33.168004  847956 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 15:14:33.168017  847956 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 15:14:33.168027  847956 command_runner.go:130] > [crio.image]
	I1114 15:14:33.168037  847956 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 15:14:33.168048  847956 command_runner.go:130] > # default_transport = "docker://"
	I1114 15:14:33.168057  847956 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 15:14:33.168071  847956 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:14:33.168081  847956 command_runner.go:130] > # global_auth_file = ""
	I1114 15:14:33.168091  847956 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 15:14:33.168109  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:14:33.168120  847956 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 15:14:33.168134  847956 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 15:14:33.168145  847956 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:14:33.168150  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:14:33.168157  847956 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 15:14:33.168163  847956 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 15:14:33.168177  847956 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1114 15:14:33.168191  847956 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1114 15:14:33.168202  847956 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 15:14:33.168213  847956 command_runner.go:130] > # pause_command = "/pause"
	I1114 15:14:33.168223  847956 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 15:14:33.168236  847956 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 15:14:33.168248  847956 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 15:14:33.168257  847956 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 15:14:33.168262  847956 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 15:14:33.168267  847956 command_runner.go:130] > # signature_policy = ""
	I1114 15:14:33.168273  847956 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 15:14:33.168282  847956 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 15:14:33.168286  847956 command_runner.go:130] > # changing them here.
	I1114 15:14:33.168292  847956 command_runner.go:130] > # insecure_registries = [
	I1114 15:14:33.168296  847956 command_runner.go:130] > # ]
	I1114 15:14:33.168302  847956 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 15:14:33.168310  847956 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1114 15:14:33.168314  847956 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 15:14:33.168327  847956 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 15:14:33.168337  847956 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 15:14:33.168351  847956 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 15:14:33.168358  847956 command_runner.go:130] > # CNI plugins.
	I1114 15:14:33.168365  847956 command_runner.go:130] > [crio.network]
	I1114 15:14:33.168375  847956 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 15:14:33.168387  847956 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1114 15:14:33.168396  847956 command_runner.go:130] > # cni_default_network = ""
	I1114 15:14:33.168406  847956 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 15:14:33.168419  847956 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 15:14:33.168428  847956 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 15:14:33.168435  847956 command_runner.go:130] > # plugin_dirs = [
	I1114 15:14:33.168443  847956 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 15:14:33.168447  847956 command_runner.go:130] > # ]
	I1114 15:14:33.168460  847956 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 15:14:33.168466  847956 command_runner.go:130] > [crio.metrics]
	I1114 15:14:33.168472  847956 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 15:14:33.168478  847956 command_runner.go:130] > enable_metrics = true
	I1114 15:14:33.168486  847956 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 15:14:33.168494  847956 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 15:14:33.168500  847956 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1114 15:14:33.168508  847956 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 15:14:33.168517  847956 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 15:14:33.168521  847956 command_runner.go:130] > # metrics_collectors = [
	I1114 15:14:33.168527  847956 command_runner.go:130] > # 	"operations",
	I1114 15:14:33.168532  847956 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 15:14:33.168539  847956 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 15:14:33.168544  847956 command_runner.go:130] > # 	"operations_errors",
	I1114 15:14:33.168553  847956 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 15:14:33.168563  847956 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 15:14:33.168575  847956 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 15:14:33.168586  847956 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 15:14:33.168596  847956 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 15:14:33.168604  847956 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 15:14:33.168614  847956 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 15:14:33.168624  847956 command_runner.go:130] > # 	"containers_oom_total",
	I1114 15:14:33.168634  847956 command_runner.go:130] > # 	"containers_oom",
	I1114 15:14:33.168638  847956 command_runner.go:130] > # 	"processes_defunct",
	I1114 15:14:33.168642  847956 command_runner.go:130] > # 	"operations_total",
	I1114 15:14:33.168649  847956 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 15:14:33.168653  847956 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 15:14:33.168660  847956 command_runner.go:130] > # 	"operations_errors_total",
	I1114 15:14:33.168665  847956 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 15:14:33.168671  847956 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 15:14:33.168676  847956 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 15:14:33.168683  847956 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 15:14:33.168687  847956 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 15:14:33.168692  847956 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 15:14:33.168695  847956 command_runner.go:130] > # ]
	I1114 15:14:33.168700  847956 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 15:14:33.168707  847956 command_runner.go:130] > # metrics_port = 9090
	I1114 15:14:33.168712  847956 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 15:14:33.168719  847956 command_runner.go:130] > # metrics_socket = ""
	I1114 15:14:33.168724  847956 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 15:14:33.168735  847956 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 15:14:33.168765  847956 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 15:14:33.168775  847956 command_runner.go:130] > # certificate on any modification event.
	I1114 15:14:33.168782  847956 command_runner.go:130] > # metrics_cert = ""
	I1114 15:14:33.168791  847956 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 15:14:33.168800  847956 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 15:14:33.168807  847956 command_runner.go:130] > # metrics_key = ""
	I1114 15:14:33.168817  847956 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 15:14:33.168825  847956 command_runner.go:130] > [crio.tracing]
	I1114 15:14:33.168830  847956 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 15:14:33.168837  847956 command_runner.go:130] > # enable_tracing = false
	I1114 15:14:33.168842  847956 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1114 15:14:33.168849  847956 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 15:14:33.168854  847956 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 15:14:33.168862  847956 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1114 15:14:33.168869  847956 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 15:14:33.168879  847956 command_runner.go:130] > [crio.stats]
	I1114 15:14:33.168890  847956 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 15:14:33.168898  847956 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 15:14:33.168903  847956 command_runner.go:130] > # stats_collection_period = 0
	I1114 15:14:33.168949  847956 command_runner.go:130] ! time="2023-11-14 15:14:33.148031955Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1114 15:14:33.168970  847956 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
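	The dump above is the CRI-O configuration the worker ends up running with, including the values minikube overrides (pids_limit, drop_infra_ctr, pinns_path, pause_image). To spot-check them on the node itself, something like the following works; this is a minimal sketch that assumes the profile and node names from this run, that `minikube ssh -n` is available for node selection in this minikube version, and that the rendered config sits at the default /etc/crio/crio.conf path:

	    minikube ssh -p multinode-627820 -n multinode-627820-m02 -- \
	      sudo grep -E 'pids_limit|drop_infra_ctr|pinns_path|pause_image' /etc/crio/crio.conf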
	I1114 15:14:33.169056  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:14:33.169066  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:14:33.169098  847956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:14:33.169120  847956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-627820 NodeName:multinode-627820-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:14:33.169242  847956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-627820-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
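	The ClusterConfiguration rendered above is also what kubeadm publishes into the kubeadm-config ConfigMap; the join step further down reads its settings back from the cluster. A quick way to inspect the live copy, assuming kubectl is pointed at this cluster's kubeconfig:

	    kubectl -n kube-system get cm kubeadm-config -o yaml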
	I1114 15:14:33.169297  847956 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-627820-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
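	The [Service] drop-in above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To confirm the flags the worker's kubelet was actually started with, a sketch along these lines (assuming SSH access to the node via minikube):

	    minikube ssh -p multinode-627820 -n multinode-627820-m02 -- systemctl cat kubelet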
	I1114 15:14:33.169348  847956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:14:33.180430  847956 command_runner.go:130] > kubeadm
	I1114 15:14:33.180456  847956 command_runner.go:130] > kubectl
	I1114 15:14:33.180463  847956 command_runner.go:130] > kubelet
	I1114 15:14:33.180528  847956 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:14:33.180605  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1114 15:14:33.190195  847956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1114 15:14:33.206330  847956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:14:33.224384  847956 ssh_runner.go:195] Run: grep 192.168.39.63	control-plane.minikube.internal$ /etc/hosts
	I1114 15:14:33.228562  847956 command_runner.go:130] > 192.168.39.63	control-plane.minikube.internal
	I1114 15:14:33.228637  847956 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:14:33.229028  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:14:33.229075  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:14:33.229145  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:14:33.244314  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I1114 15:14:33.244798  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:14:33.245351  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:14:33.245375  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:14:33.245748  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:14:33.245956  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:14:33.246139  847956 start.go:304] JoinCluster: &{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:14:33.246263  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1114 15:14:33.246291  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:14:33.249612  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:14:33.250084  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:14:33.250116  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:14:33.250272  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:14:33.250521  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:14:33.250768  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:14:33.250897  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:14:33.437729  847956 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token po8jr4.farmnkq76yn03e94 --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:14:33.437987  847956 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 15:14:33.438033  847956 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:14:33.438347  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:14:33.438406  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:14:33.454263  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I1114 15:14:33.454821  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:14:33.455399  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:14:33.455434  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:14:33.455857  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:14:33.456103  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:14:33.456372  847956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-627820-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1114 15:14:33.456401  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:14:33.459336  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:14:33.459876  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:14:33.459912  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:14:33.460074  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:14:33.460278  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:14:33.460439  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:14:33.460679  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:14:33.676136  847956 command_runner.go:130] > node/multinode-627820-m02 cordoned
	I1114 15:14:36.727200  847956 command_runner.go:130] > pod "busybox-5bc68d56bd-rxmbm" has DeletionTimestamp older than 1 seconds, skipping
	I1114 15:14:36.727231  847956 command_runner.go:130] > node/multinode-627820-m02 drained
	I1114 15:14:36.729156  847956 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1114 15:14:36.729176  847956 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-2d26z, kube-system/kube-proxy-6xg9v
	I1114 15:14:36.729203  847956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-627820-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.272802561s)
	I1114 15:14:36.729221  847956 node.go:108] successfully drained node "m02"
	I1114 15:14:36.729614  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:14:36.729834  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:14:36.730229  847956 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1114 15:14:36.730288  847956 round_trippers.go:463] DELETE https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:14:36.730296  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:36.730304  847956 round_trippers.go:473]     Content-Type: application/json
	I1114 15:14:36.730309  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:36.730315  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:36.748253  847956 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1114 15:14:36.748283  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:36.748293  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:36.748301  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:36.748310  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:36.748323  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:36.748332  847956 round_trippers.go:580]     Content-Length: 171
	I1114 15:14:36.748340  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:36 GMT
	I1114 15:14:36.748348  847956 round_trippers.go:580]     Audit-Id: 7c4bb35c-c129-49b0-b2ce-bdf2aac8ae77
	I1114 15:14:36.748457  847956 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-627820-m02","kind":"nodes","uid":"744755ad-0aac-4230-b688-92b3600f60d7"}}
	I1114 15:14:36.748510  847956 node.go:124] successfully deleted node "m02"
	I1114 15:14:36.748537  847956 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
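	The drain-then-delete sequence above is the standard way to retire a node before re-adding it; minikube drives it with its bundled kubectl binary and a direct DELETE on the Node object. Roughly the same thing from a workstation with plain kubectl would be:

	    kubectl drain multinode-627820-m02 --ignore-daemonsets --delete-emptydir-data \
	      --force --grace-period=1 --skip-wait-for-delete-timeout=1
	    kubectl delete node multinode-627820-m02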
	I1114 15:14:36.748564  847956 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 15:14:36.748602  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token po8jr4.farmnkq76yn03e94 --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-627820-m02"
	I1114 15:14:36.806257  847956 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 15:14:36.978646  847956 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1114 15:14:36.978677  847956 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1114 15:14:37.038107  847956 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:14:37.038145  847956 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:14:37.038151  847956 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 15:14:37.201318  847956 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1114 15:14:37.725761  847956 command_runner.go:130] > This node has joined the cluster:
	I1114 15:14:37.725793  847956 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1114 15:14:37.725800  847956 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1114 15:14:37.725807  847956 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1114 15:14:37.728426  847956 command_runner.go:130] ! W1114 15:14:36.794262    2601 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1114 15:14:37.728486  847956 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1114 15:14:37.728504  847956 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1114 15:14:37.728520  847956 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1114 15:14:37.728553  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1114 15:14:38.026050  847956 start.go:306] JoinCluster complete in 4.77990826s
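	The join itself is the usual token-based kubeadm flow: the control plane minted a join token earlier (`kubeadm token create --print-join-command --ttl=0`), and the worker runs `kubeadm join` with it and then restarts kubelet. Condensed below, with the token and CA-cert hash as placeholders, and using the unix:// scheme for the CRI socket since the log warns that the scheme-less form is deprecated:

	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --ignore-preflight-errors=all \
	      --cri-socket unix:///var/run/crio/crio.sock \
	      --node-name=multinode-627820-m02
	    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet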
	I1114 15:14:38.026088  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:14:38.026096  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:14:38.026170  847956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 15:14:38.032418  847956 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 15:14:38.032448  847956 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 15:14:38.032457  847956 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 15:14:38.032466  847956 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:14:38.032475  847956 command_runner.go:130] > Access: 2023-11-14 15:12:06.839117816 +0000
	I1114 15:14:38.032483  847956 command_runner.go:130] > Modify: 2023-11-09 04:45:09.000000000 +0000
	I1114 15:14:38.032491  847956 command_runner.go:130] > Change: 2023-11-14 15:12:04.750117816 +0000
	I1114 15:14:38.032497  847956 command_runner.go:130] >  Birth: -
	I1114 15:14:38.032556  847956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 15:14:38.032569  847956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 15:14:38.050636  847956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 15:14:38.420103  847956 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:14:38.428128  847956 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:14:38.432302  847956 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 15:14:38.448366  847956 command_runner.go:130] > daemonset.apps/kindnet configured
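	Re-applying the kindnet manifest makes sure the CNI DaemonSet covers the rejoined worker. To verify a kindnet pod is scheduled onto m02 (a sketch assuming the DaemonSet keeps its default app=kindnet pod label):

	    kubectl -n kube-system get pods -l app=kindnet -o wide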
	I1114 15:14:38.451946  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:14:38.452239  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:14:38.452603  847956 round_trippers.go:463] GET https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:14:38.452627  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.452635  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.452640  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.454997  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.455017  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.455026  847956 round_trippers.go:580]     Audit-Id: 31b7b3d9-c7f2-45f2-8dc3-c2af05a8758a
	I1114 15:14:38.455034  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.455041  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.455049  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.455057  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.455075  847956 round_trippers.go:580]     Content-Length: 291
	I1114 15:14:38.455090  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.455129  847956 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"855","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 15:14:38.455226  847956 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-627820" context rescaled to 1 replicas
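	The scale check above reads the Deployment's Scale subresource and then pins CoreDNS back to a single replica, which is how minikube runs it even on multi-node clusters. Done by hand it would be roughly:

	    kubectl -n kube-system scale deployment coredns --replicas=1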
	I1114 15:14:38.455261  847956 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 15:14:38.457913  847956 out.go:177] * Verifying Kubernetes components...
	I1114 15:14:38.459771  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:14:38.473989  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:14:38.474238  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:14:38.474529  847956 node_ready.go:35] waiting up to 6m0s for node "multinode-627820-m02" to be "Ready" ...
	I1114 15:14:38.474622  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:14:38.474635  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.474646  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.474658  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.477267  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.477290  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.477298  847956 round_trippers.go:580]     Audit-Id: c9a68c8a-cd81-4719-8c8a-af8da40e0c06
	I1114 15:14:38.477305  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.477312  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.477320  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.477327  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.477335  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.477921  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"5d9328d2-a334-4c14-8c25-db8d2fa4e56c","resourceVersion":"1003","creationTimestamp":"2023-11-14T15:14:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1114 15:14:38.478205  847956 node_ready.go:49] node "multinode-627820-m02" has status "Ready":"True"
	I1114 15:14:38.478219  847956 node_ready.go:38] duration metric: took 3.667089ms waiting for node "multinode-627820-m02" to be "Ready" ...
	I1114 15:14:38.478226  847956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:14:38.478282  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:14:38.478290  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.478298  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.478304  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.486866  847956 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1114 15:14:38.486887  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.486893  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.486899  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.486903  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.486908  847956 round_trippers.go:580]     Audit-Id: ded9988f-2d3b-4e8b-bbfc-677e7c969f72
	I1114 15:14:38.486913  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.486920  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.487989  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1010"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82195 chars]
	I1114 15:14:38.490373  847956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.490443  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:14:38.490451  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.490458  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.490464  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.492981  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.493002  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.493010  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.493015  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.493020  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.493037  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.493051  847956 round_trippers.go:580]     Audit-Id: 41a82e15-cde2-4cc0-be1f-75d91dbf0c0b
	I1114 15:14:38.493065  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.493165  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1114 15:14:38.493740  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:38.493761  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.493773  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.493785  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.497034  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:14:38.497066  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.497076  847956 round_trippers.go:580]     Audit-Id: 83e2093e-d5f9-4ca8-bc67-681d64987032
	I1114 15:14:38.497084  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.497092  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.497102  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.497114  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.497122  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.497333  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:14:38.497744  847956 pod_ready.go:92] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:38.497765  847956 pod_ready.go:81] duration metric: took 7.370052ms waiting for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.497777  847956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.497846  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-627820
	I1114 15:14:38.497857  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.497868  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.497881  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.499971  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.499993  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.500003  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.500012  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.500019  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.500027  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.500035  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.500045  847956 round_trippers.go:580]     Audit-Id: c2e21624-1f22-4620-9f30-78e61ad0e4aa
	I1114 15:14:38.500257  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"817","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1114 15:14:38.500700  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:38.500720  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.500730  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.500748  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.502939  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.502957  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.502966  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.502975  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.502982  847956 round_trippers.go:580]     Audit-Id: f271c9a4-e572-4cfd-8aae-6b3f5b38aae7
	I1114 15:14:38.502990  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.503000  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.503008  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.503209  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:14:38.503515  847956 pod_ready.go:92] pod "etcd-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:38.503533  847956 pod_ready.go:81] duration metric: took 5.743964ms waiting for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.503554  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.503607  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-627820
	I1114 15:14:38.503617  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.503627  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.503639  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.505385  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:14:38.505405  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.505414  847956 round_trippers.go:580]     Audit-Id: 1efcda8c-7040-4b15-9780-fa3b7bc30ba9
	I1114 15:14:38.505421  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.505428  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.505438  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.505449  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.505477  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.505699  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-627820","namespace":"kube-system","uid":"8a9b9224-3446-46f7-b525-e1f32bb9a33c","resourceVersion":"826","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.63:8443","kubernetes.io/config.hash":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.mirror":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.seen":"2023-11-14T15:02:19.515752674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1114 15:14:38.506085  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:38.506101  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.506111  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.506119  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.508082  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:14:38.508102  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.508111  847956 round_trippers.go:580]     Audit-Id: bf857130-ab06-4d66-b921-7da5c258d604
	I1114 15:14:38.508119  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.508127  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.508139  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.508147  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.508155  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.508317  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:14:38.508622  847956 pod_ready.go:92] pod "kube-apiserver-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:38.508638  847956 pod_ready.go:81] duration metric: took 5.07569ms waiting for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.508651  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.508699  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-627820
	I1114 15:14:38.508709  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.508719  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.508729  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.510796  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.510819  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.510829  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.510838  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.510846  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.510856  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.510875  847956 round_trippers.go:580]     Audit-Id: fed58fef-d645-468d-bf76-0db7c7f4e4cc
	I1114 15:14:38.510885  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.511044  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-627820","namespace":"kube-system","uid":"b4440d06-27f9-4455-ae59-2d8c744b99a2","resourceVersion":"816","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.mirror":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.seen":"2023-11-14T15:02:19.515747223Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1114 15:14:38.511532  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:38.511554  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.511565  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.511573  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.513440  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:14:38.513459  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.513469  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.513476  847956 round_trippers.go:580]     Audit-Id: 6b1305ce-bff2-4406-8c5e-72997c2e94b1
	I1114 15:14:38.513484  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.513493  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.513502  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.513510  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.513677  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:14:38.514065  847956 pod_ready.go:92] pod "kube-controller-manager-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:38.514084  847956 pod_ready.go:81] duration metric: took 5.425932ms waiting for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.514093  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.675472  847956 request.go:629] Waited for 161.283322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:14:38.675546  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:14:38.675553  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.675630  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.675652  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.680949  847956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:14:38.680990  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.681002  847956 round_trippers.go:580]     Audit-Id: d94f0e9d-0208-44ef-a9ce-5bcbd79b540c
	I1114 15:14:38.681011  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.681020  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.681030  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.681037  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.681045  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.681211  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4hf2k","generateName":"kube-proxy-","namespace":"kube-system","uid":"205bb9ac-4540-41d6-adb8-078c02d91b4e","resourceVersion":"672","creationTimestamp":"2023-11-14T15:04:00Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1114 15:14:38.874800  847956 request.go:629] Waited for 193.018451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:14:38.874922  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:14:38.874930  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:38.874946  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:38.874960  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:38.877559  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:38.877585  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:38.877596  847956 round_trippers.go:580]     Audit-Id: 1d0c141f-3c15-4c1b-b37a-01934dcd4bb8
	I1114 15:14:38.877603  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:38.877610  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:38.877617  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:38.877625  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:38.877646  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:38 GMT
	I1114 15:14:38.877863  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m03","uid":"019405fb-baac-496b-96ae-131218281f18","resourceVersion":"830","creationTimestamp":"2023-11-14T15:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1114 15:14:38.878250  847956 pod_ready.go:92] pod "kube-proxy-4hf2k" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:38.878277  847956 pod_ready.go:81] duration metric: took 364.172288ms waiting for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:38.878291  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:39.074691  847956 request.go:629] Waited for 196.294023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:14:39.074774  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:14:39.074782  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:39.074794  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:39.074803  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:39.078335  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:14:39.078366  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:39.078378  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:39 GMT
	I1114 15:14:39.078387  847956 round_trippers.go:580]     Audit-Id: 70324307-8f24-4889-b6fa-98d9228b4cc1
	I1114 15:14:39.078394  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:39.078403  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:39.078411  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:39.078419  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:39.078572  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6xg9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"2304a457-3a85-4791-8d18-4e1262db399f","resourceVersion":"1006","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5882 chars]
	I1114 15:14:39.275661  847956 request.go:629] Waited for 196.397522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:14:39.275758  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:14:39.275766  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:39.275778  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:39.275796  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:39.278938  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:14:39.278959  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:39.278966  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:39.278971  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:39.278976  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:39.278983  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:39 GMT
	I1114 15:14:39.278988  847956 round_trippers.go:580]     Audit-Id: a95593f7-6c34-482d-b503-f2cac347150a
	I1114 15:14:39.278993  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:39.279147  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"5d9328d2-a334-4c14-8c25-db8d2fa4e56c","resourceVersion":"1003","creationTimestamp":"2023-11-14T15:14:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1114 15:14:39.474775  847956 request.go:629] Waited for 195.301394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:14:39.474881  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:14:39.474908  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:39.474918  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:39.474927  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:39.477553  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:39.477574  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:39.477581  847956 round_trippers.go:580]     Audit-Id: 7c8f8bf3-c899-4851-8039-f2cd476d4814
	I1114 15:14:39.477587  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:39.477592  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:39.477597  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:39.477602  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:39.477607  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:39 GMT
	I1114 15:14:39.477913  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6xg9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"2304a457-3a85-4791-8d18-4e1262db399f","resourceVersion":"1023","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1114 15:14:39.674778  847956 request.go:629] Waited for 196.306146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:14:39.674917  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:14:39.674928  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:39.674936  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:39.674943  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:39.677910  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:39.677933  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:39.677940  847956 round_trippers.go:580]     Audit-Id: baaeaf64-bba1-4b82-8498-a93eec591e80
	I1114 15:14:39.677946  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:39.677956  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:39.677961  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:39.677967  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:39.677972  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:39 GMT
	I1114 15:14:39.678076  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"5d9328d2-a334-4c14-8c25-db8d2fa4e56c","resourceVersion":"1003","creationTimestamp":"2023-11-14T15:14:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1114 15:14:39.678341  847956 pod_ready.go:92] pod "kube-proxy-6xg9v" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:39.678359  847956 pod_ready.go:81] duration metric: took 800.055608ms waiting for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:39.678367  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:39.875031  847956 request.go:629] Waited for 196.575481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:14:39.875107  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:14:39.875113  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:39.875121  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:39.875127  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:39.878639  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:14:39.878684  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:39.878697  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:39.878706  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:39.878719  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:39.878729  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:39 GMT
	I1114 15:14:39.878741  847956 round_trippers.go:580]     Audit-Id: 0128947d-d52f-4145-8add-98e6a6fa7568
	I1114 15:14:39.878750  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:39.878983  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m24mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"73a6d4c8-2f95-4818-bc62-566099466b42","resourceVersion":"799","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5513 chars]
	I1114 15:14:40.074855  847956 request.go:629] Waited for 195.294057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:40.074923  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:40.074928  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:40.074936  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:40.074943  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:40.084662  847956 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1114 15:14:40.084695  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:40.084707  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:40.084717  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:40.084725  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:40.084733  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:40 GMT
	I1114 15:14:40.084760  847956 round_trippers.go:580]     Audit-Id: d66d14a7-6f09-4cbf-a0a9-f6ab6be0caea
	I1114 15:14:40.084772  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:40.085006  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:14:40.085355  847956 pod_ready.go:92] pod "kube-proxy-m24mc" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:40.085374  847956 pod_ready.go:81] duration metric: took 407.000839ms waiting for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:40.085383  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:40.274798  847956 request.go:629] Waited for 189.317654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:14:40.274889  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:14:40.274900  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:40.274913  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:40.274924  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:40.277899  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:40.277921  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:40.277928  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:40 GMT
	I1114 15:14:40.277938  847956 round_trippers.go:580]     Audit-Id: ed45a943-9dd5-4153-a643-84a465715514
	I1114 15:14:40.277948  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:40.277953  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:40.277958  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:40.277963  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:40.278175  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-627820","namespace":"kube-system","uid":"ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd","resourceVersion":"843","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.mirror":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.seen":"2023-11-14T15:02:19.515750784Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1114 15:14:40.475094  847956 request.go:629] Waited for 196.416432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:40.475163  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:14:40.475168  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:40.475177  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:40.475183  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:40.478121  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:14:40.478146  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:40.478157  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:40 GMT
	I1114 15:14:40.478165  847956 round_trippers.go:580]     Audit-Id: bc6e94b6-740f-4f05-bcc9-9eaeefad3061
	I1114 15:14:40.478172  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:40.478179  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:40.478187  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:40.478194  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:40.478301  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:14:40.478623  847956 pod_ready.go:92] pod "kube-scheduler-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:14:40.478640  847956 pod_ready.go:81] duration metric: took 393.250786ms waiting for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:14:40.478651  847956 pod_ready.go:38] duration metric: took 2.000416013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:14:40.478666  847956 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:14:40.478714  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:14:40.493317  847956 system_svc.go:56] duration metric: took 14.640309ms WaitForService to wait for kubelet.
	I1114 15:14:40.493350  847956 kubeadm.go:581] duration metric: took 2.0380563s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:14:40.493371  847956 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:14:40.675404  847956 request.go:629] Waited for 181.956016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes
	I1114 15:14:40.675513  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes
	I1114 15:14:40.675528  847956 round_trippers.go:469] Request Headers:
	I1114 15:14:40.675540  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:14:40.675552  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:14:40.680951  847956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:14:40.680976  847956 round_trippers.go:577] Response Headers:
	I1114 15:14:40.680984  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:14:40.680992  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:14:40.681000  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:14:40.681009  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:14:40.681021  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:14:40 GMT
	I1114 15:14:40.681035  847956 round_trippers.go:580]     Audit-Id: 0ae59eb6-7fe8-4803-a713-2b7f50e14ad7
	I1114 15:14:40.681297  847956 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1027"},"items":[{"metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15104 chars]
	I1114 15:14:40.682105  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:14:40.682134  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:14:40.682145  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:14:40.682151  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:14:40.682156  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:14:40.682162  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:14:40.682168  847956 node_conditions.go:105] duration metric: took 188.79161ms to run NodePressure ...
	I1114 15:14:40.682188  847956 start.go:228] waiting for startup goroutines ...
	I1114 15:14:40.682214  847956 start.go:242] writing updated cluster config ...
	I1114 15:14:40.682795  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:14:40.682955  847956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:14:40.685411  847956 out.go:177] * Starting worker node multinode-627820-m03 in cluster multinode-627820
	I1114 15:14:40.686716  847956 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:14:40.686746  847956 cache.go:56] Caching tarball of preloaded images
	I1114 15:14:40.686847  847956 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:14:40.686860  847956 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:14:40.686969  847956 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/config.json ...
	I1114 15:14:40.687162  847956 start.go:365] acquiring machines lock for multinode-627820-m03: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:14:40.687230  847956 start.go:369] acquired machines lock for "multinode-627820-m03" in 40.876µs
	I1114 15:14:40.687251  847956 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:14:40.687258  847956 fix.go:54] fixHost starting: m03
	I1114 15:14:40.687545  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:14:40.687599  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:14:40.702741  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I1114 15:14:40.703259  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:14:40.703765  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:14:40.703784  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:14:40.704112  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:14:40.704295  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:14:40.704439  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetState
	I1114 15:14:40.706121  847956 fix.go:102] recreateIfNeeded on multinode-627820-m03: state=Running err=<nil>
	W1114 15:14:40.706139  847956 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:14:40.708796  847956 out.go:177] * Updating the running kvm2 "multinode-627820-m03" VM ...
	I1114 15:14:40.710262  847956 machine.go:88] provisioning docker machine ...
	I1114 15:14:40.710289  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:14:40.710526  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetMachineName
	I1114 15:14:40.710713  847956 buildroot.go:166] provisioning hostname "multinode-627820-m03"
	I1114 15:14:40.710738  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetMachineName
	I1114 15:14:40.710896  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:14:40.713229  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:40.713696  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:14:40.713723  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:40.713889  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:14:40.714114  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:40.714563  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:40.714712  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:14:40.714957  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:14:40.715286  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1114 15:14:40.715300  847956 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-627820-m03 && echo "multinode-627820-m03" | sudo tee /etc/hostname
	I1114 15:14:40.863332  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-627820-m03
	
	I1114 15:14:40.863365  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:14:40.866296  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:40.866654  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:14:40.866681  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:40.866916  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:14:40.867128  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:40.867372  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:40.867504  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:14:40.867664  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:14:40.868155  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1114 15:14:40.868183  847956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-627820-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-627820-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-627820-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:14:41.001961  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:14:41.002014  847956 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:14:41.002036  847956 buildroot.go:174] setting up certificates
	I1114 15:14:41.002051  847956 provision.go:83] configureAuth start
	I1114 15:14:41.002125  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetMachineName
	I1114 15:14:41.002477  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetIP
	I1114 15:14:41.005361  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.005813  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:14:41.005843  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.006000  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:14:41.008224  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.008681  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:14:41.008713  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.008882  847956 provision.go:138] copyHostCerts
	I1114 15:14:41.008915  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:14:41.008950  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:14:41.008960  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:14:41.009029  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:14:41.009161  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:14:41.009183  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:14:41.009188  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:14:41.009217  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:14:41.009263  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:14:41.009281  847956 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:14:41.009285  847956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:14:41.009304  847956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:14:41.009351  847956 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.multinode-627820-m03 san=[192.168.39.221 192.168.39.221 localhost 127.0.0.1 minikube multinode-627820-m03]
	I1114 15:14:41.141783  847956 provision.go:172] copyRemoteCerts
	I1114 15:14:41.141845  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:14:41.141879  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:14:41.144861  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.145298  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:14:41.145327  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.145546  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:14:41.145773  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:41.145957  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:14:41.146135  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m03/id_rsa Username:docker}
	I1114 15:14:41.243228  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 15:14:41.243305  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:14:41.267203  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 15:14:41.267290  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1114 15:14:41.290110  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 15:14:41.290180  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:14:41.312389  847956 provision.go:86] duration metric: configureAuth took 310.319826ms
	I1114 15:14:41.312423  847956 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:14:41.312672  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:14:41.312801  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:14:41.315939  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.316477  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:14:41.316512  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:14:41.316769  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:14:41.317063  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:41.317314  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:14:41.317544  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:14:41.317744  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:14:41.318235  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1114 15:14:41.318260  847956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:16:11.839003  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:16:11.839134  847956 machine.go:91] provisioned docker machine in 1m31.128850362s
	I1114 15:16:11.839152  847956 start.go:300] post-start starting for "multinode-627820-m03" (driver="kvm2")
	I1114 15:16:11.839168  847956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:16:11.839197  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:16:11.839698  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:16:11.839738  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:16:11.843131  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:11.843633  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:16:11.843670  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:11.843893  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:16:11.844106  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:16:11.844254  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:16:11.844405  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m03/id_rsa Username:docker}
	I1114 15:16:11.939311  847956 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:16:11.943565  847956 command_runner.go:130] > NAME=Buildroot
	I1114 15:16:11.943593  847956 command_runner.go:130] > VERSION=2021.02.12-1-g9cb9327-dirty
	I1114 15:16:11.943600  847956 command_runner.go:130] > ID=buildroot
	I1114 15:16:11.943608  847956 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 15:16:11.943615  847956 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 15:16:11.943812  847956 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:16:11.943843  847956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:16:11.943925  847956 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:16:11.944037  847956 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:16:11.944048  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /etc/ssl/certs/8322112.pem
	I1114 15:16:11.944129  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:16:11.952584  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:16:11.977533  847956 start.go:303] post-start completed in 138.363414ms
	I1114 15:16:11.977562  847956 fix.go:56] fixHost completed within 1m31.290304613s
	I1114 15:16:11.977589  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:16:11.980485  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:11.981021  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:16:11.981049  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:11.981215  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:16:11.981441  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:16:11.981597  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:16:11.981731  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:16:11.981886  847956 main.go:141] libmachine: Using SSH client type: native
	I1114 15:16:11.982213  847956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1114 15:16:11.982223  847956 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:16:12.114262  847956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699974972.106133302
	
	I1114 15:16:12.114296  847956 fix.go:206] guest clock: 1699974972.106133302
	I1114 15:16:12.114303  847956 fix.go:219] Guest: 2023-11-14 15:16:12.106133302 +0000 UTC Remote: 2023-11-14 15:16:11.977566394 +0000 UTC m=+556.491426874 (delta=128.566908ms)
	I1114 15:16:12.114323  847956 fix.go:190] guest clock delta is within tolerance: 128.566908ms
	I1114 15:16:12.114327  847956 start.go:83] releasing machines lock for "multinode-627820-m03", held for 1m31.427086124s
	I1114 15:16:12.114350  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:16:12.114651  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetIP
	I1114 15:16:12.117707  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:12.118129  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:16:12.118164  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:12.120408  847956 out.go:177] * Found network options:
	I1114 15:16:12.122099  847956 out.go:177]   - NO_PROXY=192.168.39.63,192.168.39.38
	W1114 15:16:12.123798  847956 proxy.go:119] fail to check proxy env: Error ip not in block
	W1114 15:16:12.123826  847956 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 15:16:12.123843  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:16:12.124550  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:16:12.124803  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .DriverName
	I1114 15:16:12.124915  847956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:16:12.124960  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	W1114 15:16:12.125035  847956 proxy.go:119] fail to check proxy env: Error ip not in block
	W1114 15:16:12.125056  847956 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 15:16:12.125151  847956 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:16:12.125175  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHHostname
	I1114 15:16:12.128019  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:12.128228  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:12.128455  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:16:12.128487  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:12.128666  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:16:12.128857  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:16:12.128858  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:16:12.128896  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:12.129027  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:16:12.129153  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHPort
	I1114 15:16:12.129228  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m03/id_rsa Username:docker}
	I1114 15:16:12.129363  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHKeyPath
	I1114 15:16:12.129541  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetSSHUsername
	I1114 15:16:12.129751  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m03/id_rsa Username:docker}
	I1114 15:16:12.369114  847956 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 15:16:12.369120  847956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 15:16:12.375474  847956 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 15:16:12.375567  847956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:16:12.375645  847956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:16:12.385107  847956 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1114 15:16:12.385144  847956 start.go:472] detecting cgroup driver to use...
	I1114 15:16:12.385235  847956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:16:12.400672  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:16:12.414101  847956 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:16:12.414184  847956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:16:12.429298  847956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:16:12.444126  847956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:16:12.575828  847956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:16:12.699797  847956 docker.go:219] disabling docker service ...
	I1114 15:16:12.699881  847956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:16:12.715793  847956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:16:12.731403  847956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:16:12.862823  847956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:16:12.991032  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:16:13.005481  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:16:13.025095  847956 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 15:16:13.025158  847956 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:16:13.025237  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:16:13.036666  847956 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:16:13.036765  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:16:13.047880  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:16:13.062445  847956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:16:13.074117  847956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:16:13.084358  847956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:16:13.094724  847956 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 15:16:13.094835  847956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:16:13.105222  847956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:16:13.223273  847956 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:16:13.446498  847956 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:16:13.446593  847956 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:16:13.451813  847956 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 15:16:13.451854  847956 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 15:16:13.451866  847956 command_runner.go:130] > Device: 16h/22d	Inode: 1161        Links: 1
	I1114 15:16:13.451877  847956 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:16:13.451885  847956 command_runner.go:130] > Access: 2023-11-14 15:16:13.370073588 +0000
	I1114 15:16:13.451898  847956 command_runner.go:130] > Modify: 2023-11-14 15:16:13.370073588 +0000
	I1114 15:16:13.451909  847956 command_runner.go:130] > Change: 2023-11-14 15:16:13.370073588 +0000
	I1114 15:16:13.451917  847956 command_runner.go:130] >  Birth: -
	I1114 15:16:13.451936  847956 start.go:540] Will wait 60s for crictl version
	I1114 15:16:13.451985  847956 ssh_runner.go:195] Run: which crictl
	I1114 15:16:13.456077  847956 command_runner.go:130] > /usr/bin/crictl
	I1114 15:16:13.456252  847956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:16:13.503718  847956 command_runner.go:130] > Version:  0.1.0
	I1114 15:16:13.503745  847956 command_runner.go:130] > RuntimeName:  cri-o
	I1114 15:16:13.503750  847956 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1114 15:16:13.503790  847956 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 15:16:13.503814  847956 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:16:13.503900  847956 ssh_runner.go:195] Run: crio --version
	I1114 15:16:13.552789  847956 command_runner.go:130] > crio version 1.24.1
	I1114 15:16:13.552825  847956 command_runner.go:130] > Version:          1.24.1
	I1114 15:16:13.552837  847956 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:16:13.552845  847956 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:16:13.552854  847956 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:16:13.552863  847956 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:16:13.552870  847956 command_runner.go:130] > Compiler:         gc
	I1114 15:16:13.552878  847956 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:16:13.552887  847956 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:16:13.552901  847956 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:16:13.552924  847956 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:16:13.552935  847956 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:16:13.554387  847956 ssh_runner.go:195] Run: crio --version
	I1114 15:16:13.606272  847956 command_runner.go:130] > crio version 1.24.1
	I1114 15:16:13.606305  847956 command_runner.go:130] > Version:          1.24.1
	I1114 15:16:13.606312  847956 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1114 15:16:13.606317  847956 command_runner.go:130] > GitTreeState:     dirty
	I1114 15:16:13.606323  847956 command_runner.go:130] > BuildDate:        2023-11-09T04:38:27Z
	I1114 15:16:13.606329  847956 command_runner.go:130] > GoVersion:        go1.19.9
	I1114 15:16:13.606333  847956 command_runner.go:130] > Compiler:         gc
	I1114 15:16:13.606337  847956 command_runner.go:130] > Platform:         linux/amd64
	I1114 15:16:13.606343  847956 command_runner.go:130] > Linkmode:         dynamic
	I1114 15:16:13.606349  847956 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 15:16:13.606356  847956 command_runner.go:130] > SeccompEnabled:   true
	I1114 15:16:13.606360  847956 command_runner.go:130] > AppArmorEnabled:  false
	I1114 15:16:13.609302  847956 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:16:13.611342  847956 out.go:177]   - env NO_PROXY=192.168.39.63
	I1114 15:16:13.612891  847956 out.go:177]   - env NO_PROXY=192.168.39.63,192.168.39.38
	I1114 15:16:13.614161  847956 main.go:141] libmachine: (multinode-627820-m03) Calling .GetIP
	I1114 15:16:13.617160  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:13.617584  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:1c:12", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:04:37 +0000 UTC Type:0 Mac:52:54:00:de:1c:12 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-627820-m03 Clientid:01:52:54:00:de:1c:12}
	I1114 15:16:13.617608  847956 main.go:141] libmachine: (multinode-627820-m03) DBG | domain multinode-627820-m03 has defined IP address 192.168.39.221 and MAC address 52:54:00:de:1c:12 in network mk-multinode-627820
	I1114 15:16:13.617818  847956 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:16:13.622391  847956 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1114 15:16:13.622473  847956 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820 for IP: 192.168.39.221
	I1114 15:16:13.622517  847956 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:16:13.622686  847956 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:16:13.622723  847956 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:16:13.622738  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 15:16:13.622752  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 15:16:13.622765  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 15:16:13.622779  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 15:16:13.622830  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:16:13.622870  847956 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:16:13.622881  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:16:13.622907  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:16:13.622933  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:16:13.622956  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:16:13.622993  847956 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:16:13.623019  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:16:13.623031  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem -> /usr/share/ca-certificates/832211.pem
	I1114 15:16:13.623043  847956 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> /usr/share/ca-certificates/8322112.pem
	I1114 15:16:13.623482  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:16:13.649102  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:16:13.674134  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:16:13.700997  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:16:13.729958  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:16:13.753444  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:16:13.777542  847956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:16:13.803227  847956 ssh_runner.go:195] Run: openssl version
	I1114 15:16:13.809289  847956 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 15:16:13.809454  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:16:13.820436  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:16:13.825383  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:16:13.825599  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:16:13.825666  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:16:13.832134  847956 command_runner.go:130] > 3ec20f2e
	I1114 15:16:13.832465  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:16:13.842067  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:16:13.852358  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:16:13.857782  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:16:13.857821  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:16:13.857867  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:16:13.863804  847956 command_runner.go:130] > b5213941
	I1114 15:16:13.863892  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:16:13.873333  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:16:13.885261  847956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:16:13.910918  847956 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:16:13.910967  847956 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:16:13.911027  847956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:16:13.937717  847956 command_runner.go:130] > 51391683
	I1114 15:16:13.938771  847956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:16:13.960987  847956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:16:13.975467  847956 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:16:13.975672  847956 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:16:13.975800  847956 ssh_runner.go:195] Run: crio config
	I1114 15:16:14.058120  847956 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 15:16:14.058150  847956 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 15:16:14.058157  847956 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 15:16:14.058161  847956 command_runner.go:130] > #
	I1114 15:16:14.058173  847956 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 15:16:14.058180  847956 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 15:16:14.058189  847956 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 15:16:14.058199  847956 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 15:16:14.058209  847956 command_runner.go:130] > # reload'.
	I1114 15:16:14.058217  847956 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 15:16:14.058226  847956 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 15:16:14.058243  847956 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 15:16:14.058254  847956 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 15:16:14.058259  847956 command_runner.go:130] > [crio]
	I1114 15:16:14.058265  847956 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 15:16:14.058276  847956 command_runner.go:130] > # containers images, in this directory.
	I1114 15:16:14.058314  847956 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1114 15:16:14.058334  847956 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 15:16:14.058551  847956 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1114 15:16:14.058577  847956 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 15:16:14.058588  847956 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 15:16:14.058803  847956 command_runner.go:130] > storage_driver = "overlay"
	I1114 15:16:14.058822  847956 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 15:16:14.058832  847956 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 15:16:14.058843  847956 command_runner.go:130] > storage_option = [
	I1114 15:16:14.059017  847956 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1114 15:16:14.059135  847956 command_runner.go:130] > ]
	I1114 15:16:14.059154  847956 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 15:16:14.059164  847956 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 15:16:14.059527  847956 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 15:16:14.059543  847956 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 15:16:14.059550  847956 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 15:16:14.059555  847956 command_runner.go:130] > # always happen on a node reboot
	I1114 15:16:14.060039  847956 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 15:16:14.060069  847956 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 15:16:14.060079  847956 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 15:16:14.060101  847956 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 15:16:14.060563  847956 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 15:16:14.060583  847956 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 15:16:14.060600  847956 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 15:16:14.061276  847956 command_runner.go:130] > # internal_wipe = true
	I1114 15:16:14.061298  847956 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 15:16:14.061309  847956 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 15:16:14.061317  847956 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 15:16:14.061788  847956 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 15:16:14.061811  847956 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 15:16:14.061818  847956 command_runner.go:130] > [crio.api]
	I1114 15:16:14.061827  847956 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 15:16:14.061856  847956 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 15:16:14.061870  847956 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 15:16:14.062558  847956 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 15:16:14.062585  847956 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 15:16:14.062594  847956 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 15:16:14.062601  847956 command_runner.go:130] > # stream_port = "0"
	I1114 15:16:14.062611  847956 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 15:16:14.063159  847956 command_runner.go:130] > # stream_enable_tls = false
	I1114 15:16:14.063177  847956 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 15:16:14.063618  847956 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 15:16:14.063638  847956 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 15:16:14.063649  847956 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 15:16:14.063655  847956 command_runner.go:130] > # minutes.
	I1114 15:16:14.063935  847956 command_runner.go:130] > # stream_tls_cert = ""
	I1114 15:16:14.063993  847956 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 15:16:14.064014  847956 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 15:16:14.064488  847956 command_runner.go:130] > # stream_tls_key = ""
	I1114 15:16:14.064506  847956 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 15:16:14.064517  847956 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 15:16:14.064526  847956 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 15:16:14.066300  847956 command_runner.go:130] > # stream_tls_ca = ""
	I1114 15:16:14.066322  847956 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:16:14.066330  847956 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1114 15:16:14.066341  847956 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 15:16:14.066347  847956 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1114 15:16:14.066372  847956 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 15:16:14.066387  847956 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 15:16:14.066394  847956 command_runner.go:130] > [crio.runtime]
	I1114 15:16:14.066407  847956 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 15:16:14.066419  847956 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 15:16:14.066427  847956 command_runner.go:130] > # "nofile=1024:2048"
	I1114 15:16:14.066440  847956 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 15:16:14.066449  847956 command_runner.go:130] > # default_ulimits = [
	I1114 15:16:14.066455  847956 command_runner.go:130] > # ]
	I1114 15:16:14.066469  847956 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 15:16:14.066480  847956 command_runner.go:130] > # no_pivot = false
	I1114 15:16:14.066490  847956 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 15:16:14.066503  847956 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 15:16:14.066515  847956 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 15:16:14.066528  847956 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 15:16:14.066540  847956 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 15:16:14.066557  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:16:14.066569  847956 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1114 15:16:14.066577  847956 command_runner.go:130] > # Cgroup setting for conmon
	I1114 15:16:14.066591  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 15:16:14.066602  847956 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 15:16:14.066615  847956 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 15:16:14.066627  847956 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 15:16:14.066637  847956 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 15:16:14.066647  847956 command_runner.go:130] > conmon_env = [
	I1114 15:16:14.066656  847956 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1114 15:16:14.066664  847956 command_runner.go:130] > ]
	I1114 15:16:14.066672  847956 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 15:16:14.066682  847956 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 15:16:14.066691  847956 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 15:16:14.066700  847956 command_runner.go:130] > # default_env = [
	I1114 15:16:14.066708  847956 command_runner.go:130] > # ]
	I1114 15:16:14.066717  847956 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 15:16:14.066725  847956 command_runner.go:130] > # selinux = false
	I1114 15:16:14.066735  847956 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 15:16:14.066747  847956 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 15:16:14.066756  847956 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 15:16:14.066765  847956 command_runner.go:130] > # seccomp_profile = ""
	I1114 15:16:14.066774  847956 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 15:16:14.066786  847956 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 15:16:14.066825  847956 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 15:16:14.066839  847956 command_runner.go:130] > # which might increase security.
	I1114 15:16:14.066847  847956 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1114 15:16:14.066857  847956 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 15:16:14.066870  847956 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 15:16:14.066882  847956 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 15:16:14.066892  847956 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 15:16:14.066904  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:16:14.066915  847956 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 15:16:14.066925  847956 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 15:16:14.066936  847956 command_runner.go:130] > # the cgroup blockio controller.
	I1114 15:16:14.066943  847956 command_runner.go:130] > # blockio_config_file = ""
	I1114 15:16:14.066957  847956 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 15:16:14.066967  847956 command_runner.go:130] > # irqbalance daemon.
	I1114 15:16:14.066976  847956 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 15:16:14.066988  847956 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 15:16:14.066997  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:16:14.067007  847956 command_runner.go:130] > # rdt_config_file = ""
	I1114 15:16:14.067016  847956 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 15:16:14.067026  847956 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 15:16:14.067087  847956 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 15:16:14.067105  847956 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 15:16:14.067115  847956 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 15:16:14.067128  847956 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 15:16:14.067138  847956 command_runner.go:130] > # will be added.
	I1114 15:16:14.067146  847956 command_runner.go:130] > # default_capabilities = [
	I1114 15:16:14.067154  847956 command_runner.go:130] > # 	"CHOWN",
	I1114 15:16:14.067161  847956 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 15:16:14.067171  847956 command_runner.go:130] > # 	"FSETID",
	I1114 15:16:14.067177  847956 command_runner.go:130] > # 	"FOWNER",
	I1114 15:16:14.067187  847956 command_runner.go:130] > # 	"SETGID",
	I1114 15:16:14.067194  847956 command_runner.go:130] > # 	"SETUID",
	I1114 15:16:14.067203  847956 command_runner.go:130] > # 	"SETPCAP",
	I1114 15:16:14.067210  847956 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 15:16:14.067219  847956 command_runner.go:130] > # 	"KILL",
	I1114 15:16:14.067224  847956 command_runner.go:130] > # ]
	I1114 15:16:14.067234  847956 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 15:16:14.067246  847956 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:16:14.067255  847956 command_runner.go:130] > # default_sysctls = [
	I1114 15:16:14.067261  847956 command_runner.go:130] > # ]
	I1114 15:16:14.067271  847956 command_runner.go:130] > # List of devices on the host that a
	I1114 15:16:14.067283  847956 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 15:16:14.067292  847956 command_runner.go:130] > # allowed_devices = [
	I1114 15:16:14.067298  847956 command_runner.go:130] > # 	"/dev/fuse",
	I1114 15:16:14.067306  847956 command_runner.go:130] > # ]
	I1114 15:16:14.067314  847956 command_runner.go:130] > # List of additional devices, specified as
	I1114 15:16:14.067329  847956 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 15:16:14.067341  847956 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 15:16:14.067385  847956 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 15:16:14.067396  847956 command_runner.go:130] > # additional_devices = [
	I1114 15:16:14.067401  847956 command_runner.go:130] > # ]
	I1114 15:16:14.067412  847956 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 15:16:14.067419  847956 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 15:16:14.067425  847956 command_runner.go:130] > # 	"/etc/cdi",
	I1114 15:16:14.067431  847956 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 15:16:14.067437  847956 command_runner.go:130] > # ]
	I1114 15:16:14.067447  847956 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 15:16:14.067460  847956 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 15:16:14.067469  847956 command_runner.go:130] > # Defaults to false.
	I1114 15:16:14.067478  847956 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 15:16:14.067491  847956 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 15:16:14.067503  847956 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 15:16:14.067512  847956 command_runner.go:130] > # hooks_dir = [
	I1114 15:16:14.067520  847956 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 15:16:14.067526  847956 command_runner.go:130] > # ]
	I1114 15:16:14.067536  847956 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 15:16:14.067549  847956 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 15:16:14.067559  847956 command_runner.go:130] > # its default mounts from the following two files:
	I1114 15:16:14.067566  847956 command_runner.go:130] > #
	I1114 15:16:14.067577  847956 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 15:16:14.067590  847956 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 15:16:14.067602  847956 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 15:16:14.067609  847956 command_runner.go:130] > #
	I1114 15:16:14.067618  847956 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 15:16:14.067629  847956 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 15:16:14.067642  847956 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 15:16:14.067652  847956 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 15:16:14.067658  847956 command_runner.go:130] > #
	I1114 15:16:14.067668  847956 command_runner.go:130] > # default_mounts_file = ""
	I1114 15:16:14.067677  847956 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 15:16:14.067690  847956 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 15:16:14.067699  847956 command_runner.go:130] > pids_limit = 1024
	I1114 15:16:14.067708  847956 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1114 15:16:14.067721  847956 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 15:16:14.067735  847956 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 15:16:14.067752  847956 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 15:16:14.067762  847956 command_runner.go:130] > # log_size_max = -1
	I1114 15:16:14.067773  847956 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1114 15:16:14.067782  847956 command_runner.go:130] > # log_to_journald = false
	I1114 15:16:14.067826  847956 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 15:16:14.067837  847956 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 15:16:14.067846  847956 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 15:16:14.067856  847956 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 15:16:14.067865  847956 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 15:16:14.067875  847956 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 15:16:14.067884  847956 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 15:16:14.067894  847956 command_runner.go:130] > # read_only = false
	I1114 15:16:14.067906  847956 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 15:16:14.067919  847956 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 15:16:14.067929  847956 command_runner.go:130] > # live configuration reload.
	I1114 15:16:14.067943  847956 command_runner.go:130] > # log_level = "info"
	I1114 15:16:14.067956  847956 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 15:16:14.067964  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:16:14.067973  847956 command_runner.go:130] > # log_filter = ""
	I1114 15:16:14.067983  847956 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 15:16:14.067996  847956 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 15:16:14.068004  847956 command_runner.go:130] > # separated by comma.
	I1114 15:16:14.068010  847956 command_runner.go:130] > # uid_mappings = ""
	I1114 15:16:14.068023  847956 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 15:16:14.068034  847956 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 15:16:14.068043  847956 command_runner.go:130] > # separated by comma.
	I1114 15:16:14.068056  847956 command_runner.go:130] > # gid_mappings = ""
	I1114 15:16:14.068069  847956 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 15:16:14.068079  847956 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:16:14.068090  847956 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:16:14.068097  847956 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 15:16:14.068110  847956 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 15:16:14.068124  847956 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 15:16:14.068138  847956 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 15:16:14.068147  847956 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 15:16:14.068160  847956 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 15:16:14.068173  847956 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 15:16:14.068182  847956 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1114 15:16:14.068193  847956 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 15:16:14.068202  847956 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 15:16:14.068212  847956 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 15:16:14.068220  847956 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 15:16:14.068232  847956 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 15:16:14.068241  847956 command_runner.go:130] > drop_infra_ctr = false
	I1114 15:16:14.068250  847956 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 15:16:14.068264  847956 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1114 15:16:14.068279  847956 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 15:16:14.068289  847956 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 15:16:14.068299  847956 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 15:16:14.068310  847956 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 15:16:14.068318  847956 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 15:16:14.068332  847956 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 15:16:14.068343  847956 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1114 15:16:14.068354  847956 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 15:16:14.068367  847956 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 15:16:14.068380  847956 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 15:16:14.068387  847956 command_runner.go:130] > # default_runtime = "runc"
	I1114 15:16:14.068396  847956 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 15:16:14.068409  847956 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1114 15:16:14.068427  847956 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 15:16:14.068438  847956 command_runner.go:130] > # creation as a file is not desired either.
	I1114 15:16:14.068451  847956 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 15:16:14.068465  847956 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 15:16:14.068472  847956 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 15:16:14.068481  847956 command_runner.go:130] > # ]
	I1114 15:16:14.068491  847956 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 15:16:14.068504  847956 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 15:16:14.068516  847956 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 15:16:14.068529  847956 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 15:16:14.068536  847956 command_runner.go:130] > #
	I1114 15:16:14.068543  847956 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 15:16:14.068556  847956 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 15:16:14.068567  847956 command_runner.go:130] > #  runtime_type = "oci"
	I1114 15:16:14.068575  847956 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 15:16:14.068584  847956 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 15:16:14.068588  847956 command_runner.go:130] > #  allowed_annotations = []
	I1114 15:16:14.068595  847956 command_runner.go:130] > # Where:
	I1114 15:16:14.068600  847956 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 15:16:14.068606  847956 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 15:16:14.068619  847956 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 15:16:14.068627  847956 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 15:16:14.068632  847956 command_runner.go:130] > #   in $PATH.
	I1114 15:16:14.068638  847956 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 15:16:14.068644  847956 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 15:16:14.068650  847956 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 15:16:14.068656  847956 command_runner.go:130] > #   state.
	I1114 15:16:14.068662  847956 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 15:16:14.068668  847956 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1114 15:16:14.068677  847956 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 15:16:14.068683  847956 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 15:16:14.068689  847956 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 15:16:14.068696  847956 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 15:16:14.068703  847956 command_runner.go:130] > #   The currently recognized values are:
	I1114 15:16:14.068709  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 15:16:14.068719  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 15:16:14.068725  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 15:16:14.068731  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 15:16:14.068750  847956 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 15:16:14.068764  847956 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 15:16:14.068775  847956 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 15:16:14.068790  847956 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 15:16:14.068799  847956 command_runner.go:130] > #   should be moved to the container's cgroup
	I1114 15:16:14.068804  847956 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 15:16:14.068811  847956 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1114 15:16:14.068815  847956 command_runner.go:130] > runtime_type = "oci"
	I1114 15:16:14.068819  847956 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 15:16:14.068825  847956 command_runner.go:130] > runtime_config_path = ""
	I1114 15:16:14.068831  847956 command_runner.go:130] > monitor_path = ""
	I1114 15:16:14.068835  847956 command_runner.go:130] > monitor_cgroup = ""
	I1114 15:16:14.068840  847956 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 15:16:14.068847  847956 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 15:16:14.068851  847956 command_runner.go:130] > # running containers
	I1114 15:16:14.068856  847956 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1114 15:16:14.068865  847956 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 15:16:14.068894  847956 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 15:16:14.068903  847956 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 15:16:14.068908  847956 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 15:16:14.068913  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 15:16:14.068918  847956 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 15:16:14.068925  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 15:16:14.068931  847956 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 15:16:14.068937  847956 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1114 15:16:14.068943  847956 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 15:16:14.068949  847956 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 15:16:14.068957  847956 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 15:16:14.068965  847956 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1114 15:16:14.068980  847956 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1114 15:16:14.068993  847956 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 15:16:14.069009  847956 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 15:16:14.069025  847956 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 15:16:14.069035  847956 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1114 15:16:14.069054  847956 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 15:16:14.069062  847956 command_runner.go:130] > # Example:
	I1114 15:16:14.069071  847956 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 15:16:14.069079  847956 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 15:16:14.069085  847956 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 15:16:14.069092  847956 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 15:16:14.069096  847956 command_runner.go:130] > # cpuset = 0
	I1114 15:16:14.069103  847956 command_runner.go:130] > # cpushares = "0-1"
	I1114 15:16:14.069107  847956 command_runner.go:130] > # Where:
	I1114 15:16:14.069111  847956 command_runner.go:130] > # The workload name is workload-type.
	I1114 15:16:14.069120  847956 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 15:16:14.069126  847956 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 15:16:14.069133  847956 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 15:16:14.069143  847956 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 15:16:14.069149  847956 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 15:16:14.069155  847956 command_runner.go:130] > # 
	I1114 15:16:14.069164  847956 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 15:16:14.069170  847956 command_runner.go:130] > #
	I1114 15:16:14.069176  847956 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 15:16:14.069183  847956 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 15:16:14.069190  847956 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 15:16:14.069198  847956 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 15:16:14.069204  847956 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 15:16:14.069208  847956 command_runner.go:130] > [crio.image]
	I1114 15:16:14.069214  847956 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 15:16:14.069219  847956 command_runner.go:130] > # default_transport = "docker://"
	I1114 15:16:14.069228  847956 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 15:16:14.069236  847956 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:16:14.069241  847956 command_runner.go:130] > # global_auth_file = ""
	I1114 15:16:14.069248  847956 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 15:16:14.069254  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:16:14.069261  847956 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 15:16:14.069267  847956 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 15:16:14.069274  847956 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 15:16:14.069279  847956 command_runner.go:130] > # This option supports live configuration reload.
	I1114 15:16:14.069286  847956 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 15:16:14.069293  847956 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 15:16:14.069301  847956 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1114 15:16:14.069307  847956 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1114 15:16:14.069314  847956 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 15:16:14.069319  847956 command_runner.go:130] > # pause_command = "/pause"
	I1114 15:16:14.069327  847956 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 15:16:14.069333  847956 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 15:16:14.069340  847956 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 15:16:14.069346  847956 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 15:16:14.069354  847956 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 15:16:14.069358  847956 command_runner.go:130] > # signature_policy = ""
	I1114 15:16:14.069364  847956 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 15:16:14.069373  847956 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 15:16:14.069378  847956 command_runner.go:130] > # changing them here.
	I1114 15:16:14.069384  847956 command_runner.go:130] > # insecure_registries = [
	I1114 15:16:14.069388  847956 command_runner.go:130] > # ]
	I1114 15:16:14.069396  847956 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 15:16:14.069401  847956 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1114 15:16:14.069408  847956 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 15:16:14.069413  847956 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 15:16:14.069417  847956 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 15:16:14.069423  847956 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 15:16:14.069428  847956 command_runner.go:130] > # CNI plugins.
	I1114 15:16:14.069431  847956 command_runner.go:130] > [crio.network]
	I1114 15:16:14.069437  847956 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 15:16:14.069445  847956 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1114 15:16:14.069450  847956 command_runner.go:130] > # cni_default_network = ""
	I1114 15:16:14.069455  847956 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 15:16:14.069463  847956 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 15:16:14.069468  847956 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 15:16:14.069475  847956 command_runner.go:130] > # plugin_dirs = [
	I1114 15:16:14.069479  847956 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 15:16:14.069483  847956 command_runner.go:130] > # ]
	I1114 15:16:14.069489  847956 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 15:16:14.069495  847956 command_runner.go:130] > [crio.metrics]
	I1114 15:16:14.069500  847956 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 15:16:14.069504  847956 command_runner.go:130] > enable_metrics = true
	I1114 15:16:14.069511  847956 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 15:16:14.069516  847956 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 15:16:14.069525  847956 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1114 15:16:14.069531  847956 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 15:16:14.069539  847956 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 15:16:14.069543  847956 command_runner.go:130] > # metrics_collectors = [
	I1114 15:16:14.069547  847956 command_runner.go:130] > # 	"operations",
	I1114 15:16:14.069552  847956 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 15:16:14.069559  847956 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 15:16:14.069564  847956 command_runner.go:130] > # 	"operations_errors",
	I1114 15:16:14.069571  847956 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 15:16:14.069576  847956 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 15:16:14.069581  847956 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 15:16:14.069585  847956 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 15:16:14.069592  847956 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 15:16:14.069596  847956 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 15:16:14.069600  847956 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 15:16:14.069607  847956 command_runner.go:130] > # 	"containers_oom_total",
	I1114 15:16:14.069610  847956 command_runner.go:130] > # 	"containers_oom",
	I1114 15:16:14.069614  847956 command_runner.go:130] > # 	"processes_defunct",
	I1114 15:16:14.069621  847956 command_runner.go:130] > # 	"operations_total",
	I1114 15:16:14.069625  847956 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 15:16:14.069630  847956 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 15:16:14.069634  847956 command_runner.go:130] > # 	"operations_errors_total",
	I1114 15:16:14.069641  847956 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 15:16:14.069645  847956 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 15:16:14.069652  847956 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 15:16:14.069656  847956 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 15:16:14.069660  847956 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 15:16:14.069667  847956 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 15:16:14.069671  847956 command_runner.go:130] > # ]
	I1114 15:16:14.069676  847956 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 15:16:14.069680  847956 command_runner.go:130] > # metrics_port = 9090
	I1114 15:16:14.069685  847956 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 15:16:14.069692  847956 command_runner.go:130] > # metrics_socket = ""
	I1114 15:16:14.069697  847956 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 15:16:14.069705  847956 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 15:16:14.069713  847956 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 15:16:14.069717  847956 command_runner.go:130] > # certificate on any modification event.
	I1114 15:16:14.069721  847956 command_runner.go:130] > # metrics_cert = ""
	I1114 15:16:14.069727  847956 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 15:16:14.069734  847956 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 15:16:14.069738  847956 command_runner.go:130] > # metrics_key = ""
	I1114 15:16:14.069744  847956 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 15:16:14.069750  847956 command_runner.go:130] > [crio.tracing]
	I1114 15:16:14.069756  847956 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 15:16:14.069760  847956 command_runner.go:130] > # enable_tracing = false
	I1114 15:16:14.069766  847956 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1114 15:16:14.069771  847956 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 15:16:14.069777  847956 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 15:16:14.069783  847956 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1114 15:16:14.069789  847956 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 15:16:14.069795  847956 command_runner.go:130] > [crio.stats]
	I1114 15:16:14.069801  847956 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 15:16:14.069806  847956 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 15:16:14.069813  847956 command_runner.go:130] > # stats_collection_period = 0
	I1114 15:16:14.069852  847956 command_runner.go:130] ! time="2023-11-14 15:16:14.047447593Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1114 15:16:14.069865  847956 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
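
The dump above is CRI-O's effective configuration on the joining node; the uncommented lines are the settings set explicitly in this environment (e.g. cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.9"). A minimal Go sketch of checking a few of those keys against `crio config` output follows; the key list is illustrative, not the exact set minikube verifies.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// wanted lists a few of the explicit settings visible in the dump above
// (illustrative selection, not the exact set minikube checks).
var wanted = map[string]string{
	"cgroup_manager": `"cgroupfs"`,
	"conmon_cgroup":  `"pod"`,
	"pids_limit":     "1024",
	"pause_image":    `"registry.k8s.io/pause:3.9"`,
}

func main() {
	out, err := exec.Command("crio", "config").Output()
	if err != nil {
		fmt.Println("crio config failed:", err)
		return
	}
	found := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
			continue // skip blanks, comments and [section] headers
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			found[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	for k, want := range wanted {
		if got := found[k]; got != want {
			fmt.Printf("%s: got %s, want %s\n", k, got, want)
		}
	}
}
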
	I1114 15:16:14.069929  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:16:14.069939  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:16:14.069949  847956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:16:14.069970  847956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-627820 NodeName:multinode-627820-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:16:14.070120  847956 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-627820-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
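
The four config documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options logged just before them; only the node-specific fields (name, node-ip, CRI socket) differ between nodes. A minimal sketch of templating the nodeRegistration block in Go follows; the template and struct are illustrative, not minikube's actual generator.

package main

import (
	"os"
	"text/template"
)

// nodeOpts holds the per-node values that differ between the generated
// InitConfiguration documents (name, node IP, CRI socket).
type nodeOpts struct {
	NodeName  string
	NodeIP    string
	CRISocket string
}

const nodeRegistrationTmpl = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeRegistrationTmpl))
	// Values taken from the kubeadm options logged for multinode-627820-m03.
	_ = t.Execute(os.Stdout, nodeOpts{
		NodeName:  "multinode-627820-m03",
		NodeIP:    "192.168.39.221",
		CRISocket: "unix:///var/run/crio/crio.sock",
	})
}
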
	I1114 15:16:14.070181  847956 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-627820-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
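
The [Service] drop-in above is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 380-byte scp below). A minimal sketch of assembling that ExecStart line from the node-specific values; the helper is hypothetical and simply reproduces the flag set shown in the unit.

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart rebuilds the ExecStart line from the drop-in above
// using the node-specific values (hypothetical helper, not minikube code).
func kubeletExecStart(version, nodeName, nodeIP string) string {
	bin := "/var/lib/minikube/binaries/" + version + "/kubelet"
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return bin + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.28.3", "multinode-627820-m03", "192.168.39.221"))
}
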
	I1114 15:16:14.070238  847956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:16:14.080408  847956 command_runner.go:130] > kubeadm
	I1114 15:16:14.080451  847956 command_runner.go:130] > kubectl
	I1114 15:16:14.080456  847956 command_runner.go:130] > kubelet
	I1114 15:16:14.080487  847956 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:16:14.080559  847956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1114 15:16:14.094688  847956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1114 15:16:14.119617  847956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:16:14.137639  847956 ssh_runner.go:195] Run: grep 192.168.39.63	control-plane.minikube.internal$ /etc/hosts
	I1114 15:16:14.141543  847956 command_runner.go:130] > 192.168.39.63	control-plane.minikube.internal
	I1114 15:16:14.141719  847956 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:16:14.142115  847956 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:16:14.142196  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:16:14.142248  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:16:14.157606  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
	I1114 15:16:14.158103  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:16:14.158590  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:16:14.158610  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:16:14.158951  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:16:14.159148  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:16:14.159335  847956 start.go:304] JoinCluster: &{Name:multinode-627820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-627820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.38 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:16:14.159512  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1114 15:16:14.159536  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:16:14.162634  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:16:14.163168  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:16:14.163194  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:16:14.163389  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:16:14.163599  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:16:14.163770  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:16:14.163943  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:16:14.375926  847956 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token upipm3.imril4bww44o1zj1 --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
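
The line above is the raw join command printed by `kubeadm token create --print-join-command --ttl=0` on the control plane (run over SSH to 192.168.39.63). A minimal sketch of capturing it, assuming kubeadm is run locally rather than through the SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// A non-expiring bootstrap token plus the matching join command,
	// as produced on the control plane in the log (--ttl=0).
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	fmt.Println("join command:", strings.TrimSpace(string(out)))
}
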
	I1114 15:16:14.375977  847956 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1114 15:16:14.376030  847956 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:16:14.376368  847956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:16:14.376418  847956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:16:14.391484  847956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I1114 15:16:14.391945  847956 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:16:14.392475  847956 main.go:141] libmachine: Using API Version  1
	I1114 15:16:14.392497  847956 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:16:14.392837  847956 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:16:14.393042  847956 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:16:14.393255  847956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-627820-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1114 15:16:14.393289  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:16:14.396062  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:16:14.396524  847956 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:16:14.396549  847956 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:16:14.396820  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:16:14.397011  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:16:14.397196  847956 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:16:14.397374  847956 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:16:14.598298  847956 command_runner.go:130] > node/multinode-627820-m03 cordoned
	I1114 15:16:17.639201  847956 command_runner.go:130] > pod "busybox-5bc68d56bd-p5lnm" has DeletionTimestamp older than 1 seconds, skipping
	I1114 15:16:17.639233  847956 command_runner.go:130] > node/multinode-627820-m03 drained
	I1114 15:16:17.641299  847956 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1114 15:16:17.641327  847956 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-8wr7d, kube-system/kube-proxy-4hf2k
	I1114 15:16:17.641348  847956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-627820-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.248070739s)
	I1114 15:16:17.641363  847956 node.go:108] successfully drained node "m03"
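
The stale m03 node is drained with kubectl against the kubeconfig inside the VM; the deprecated --delete-local-data flag accounts for the warning, and the kindnet/kube-proxy DaemonSet pods are skipped. A minimal local-equivalent sketch, assuming kubectl and a reachable kubeconfig (and dropping the deprecated flag):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags the log passes, minus the deprecated --delete-local-data.
	cmd := exec.Command("kubectl", "drain", "multinode-627820-m03",
		"--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
		"--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("drain failed:", err)
	}
}
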
	I1114 15:16:17.641894  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:16:17.642242  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:16:17.642594  847956 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1114 15:16:17.642661  847956 round_trippers.go:463] DELETE https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:16:17.642668  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:17.642676  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:17.642682  847956 round_trippers.go:473]     Content-Type: application/json
	I1114 15:16:17.642690  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:17.654938  847956 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1114 15:16:17.654979  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:17.654991  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:17 GMT
	I1114 15:16:17.655000  847956 round_trippers.go:580]     Audit-Id: 648634e6-f3a0-448e-9ac3-7b2db17d47fd
	I1114 15:16:17.655008  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:17.655020  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:17.655027  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:17.655035  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:17.655042  847956 round_trippers.go:580]     Content-Length: 171
	I1114 15:16:17.655076  847956 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-627820-m03","kind":"nodes","uid":"019405fb-baac-496b-96ae-131218281f18"}}
	I1114 15:16:17.655118  847956 node.go:124] successfully deleted node "m03"
	I1114 15:16:17.655130  847956 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
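
The DELETE request above removes the stale Node object directly through the API server before rejoining. A minimal client-go sketch of the same call, assuming a kubeconfig at the default location rather than the profile client cert/key the log loads:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config; the log instead uses the profile's
	// client certificate and key under the minikube-integration home.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("kubeconfig:", err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("client:", err)
		return
	}
	// Equivalent of: DELETE /api/v1/nodes/multinode-627820-m03
	if err := cs.CoreV1().Nodes().Delete(context.TODO(),
		"multinode-627820-m03", metav1.DeleteOptions{}); err != nil {
		fmt.Println("delete node:", err)
		return
	}
	fmt.Println("node deleted")
}
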
	I1114 15:16:17.655156  847956 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1114 15:16:17.655181  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token upipm3.imril4bww44o1zj1 --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-627820-m03"
	I1114 15:16:17.742781  847956 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 15:16:17.906843  847956 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1114 15:16:17.906883  847956 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1114 15:16:17.971637  847956 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:16:17.971672  847956 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:16:17.971679  847956 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 15:16:18.106577  847956 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1114 15:16:18.633414  847956 command_runner.go:130] > This node has joined the cluster:
	I1114 15:16:18.633442  847956 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1114 15:16:18.633448  847956 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1114 15:16:18.633455  847956 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1114 15:16:18.636200  847956 command_runner.go:130] ! W1114 15:16:17.734758    2478 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1114 15:16:18.636227  847956 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1114 15:16:18.636240  847956 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1114 15:16:18.636253  847956 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1114 15:16:18.636598  847956 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1114 15:16:18.920723  847956 start.go:306] JoinCluster complete in 4.761383857s
	I1114 15:16:18.920761  847956 cni.go:84] Creating CNI manager for ""
	I1114 15:16:18.920770  847956 cni.go:136] 3 nodes found, recommending kindnet
	I1114 15:16:18.920835  847956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 15:16:18.928121  847956 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 15:16:18.928154  847956 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 15:16:18.928164  847956 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 15:16:18.928173  847956 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 15:16:18.928181  847956 command_runner.go:130] > Access: 2023-11-14 15:12:06.839117816 +0000
	I1114 15:16:18.928189  847956 command_runner.go:130] > Modify: 2023-11-09 04:45:09.000000000 +0000
	I1114 15:16:18.928196  847956 command_runner.go:130] > Change: 2023-11-14 15:12:04.750117816 +0000
	I1114 15:16:18.928203  847956 command_runner.go:130] >  Birth: -
	I1114 15:16:18.928272  847956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 15:16:18.928286  847956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 15:16:18.947717  847956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 15:16:19.306975  847956 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:16:19.311381  847956 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 15:16:19.316078  847956 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 15:16:19.326483  847956 command_runner.go:130] > daemonset.apps/kindnet configured
	I1114 15:16:19.329416  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:16:19.329686  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:16:19.330012  847956 round_trippers.go:463] GET https://192.168.39.63:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 15:16:19.330027  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.330038  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.330047  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.332359  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:16:19.332383  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.332393  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.332401  847956 round_trippers.go:580]     Audit-Id: 80f2d3ce-3071-49b6-937c-9bb316ac04c8
	I1114 15:16:19.332409  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.332416  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.332428  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.332441  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.332448  847956 round_trippers.go:580]     Content-Length: 291
	I1114 15:16:19.332476  847956 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"57bccca2-f0e4-486c-b5a0-3985938d2dae","resourceVersion":"855","creationTimestamp":"2023-11-14T15:02:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 15:16:19.332612  847956 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-627820" context rescaled to 1 replicas
	I1114 15:16:19.332648  847956 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.221 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1114 15:16:19.335436  847956 out.go:177] * Verifying Kubernetes components...
	I1114 15:16:19.336890  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:16:19.351707  847956 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:16:19.352056  847956 kapi.go:59] client config for multinode-627820: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/multinode-627820/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:16:19.352332  847956 node_ready.go:35] waiting up to 6m0s for node "multinode-627820-m03" to be "Ready" ...
	I1114 15:16:19.352416  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:16:19.352425  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.352433  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.352442  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.355337  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:16:19.355364  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.355373  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.355381  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.355388  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.355395  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.355403  847956 round_trippers.go:580]     Audit-Id: ce9f1d53-482a-474f-ad8f-20164fbd9062
	I1114 15:16:19.355412  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.355574  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m03","uid":"ae61b854-86e3-415c-8f53-64e10c4d7cae","resourceVersion":"1189","creationTimestamp":"2023-11-14T15:16:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:16:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:16:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1114 15:16:19.355891  847956 node_ready.go:49] node "multinode-627820-m03" has status "Ready":"True"
	I1114 15:16:19.355909  847956 node_ready.go:38] duration metric: took 3.561837ms waiting for node "multinode-627820-m03" to be "Ready" ...
	I1114 15:16:19.355918  847956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:16:19.355971  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods
	I1114 15:16:19.355979  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.355987  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.355992  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.360316  847956 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 15:16:19.360343  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.360353  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.360364  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.360371  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.360379  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.360387  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.360399  847956 round_trippers.go:580]     Audit-Id: 094ccdec-2e03-4f57-b392-6b042bf22894
	I1114 15:16:19.362364  847956 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1195"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81880 chars]
	I1114 15:16:19.365301  847956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.365403  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vh8ng
	I1114 15:16:19.365413  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.365421  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.365427  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.367681  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:16:19.367696  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.367703  847956 round_trippers.go:580]     Audit-Id: 4c5eb7f0-07a1-46d0-83e9-b4e337b89132
	I1114 15:16:19.367710  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.367717  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.367725  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.367733  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.367744  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.368013  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vh8ng","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"25afe3b4-014e-4180-9597-fb237d622c81","resourceVersion":"851","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deb1520c-2769-4f29-8152-ddb701ff98f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deb1520c-2769-4f29-8152-ddb701ff98f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1114 15:16:19.368461  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:19.368477  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.368484  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.368490  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.370802  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:16:19.370818  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.370826  847956 round_trippers.go:580]     Audit-Id: 8cccd606-fdfe-4afa-9882-77adbf98b478
	I1114 15:16:19.370831  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.370837  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.370846  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.370857  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.370863  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.371055  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:16:19.371388  847956 pod_ready.go:92] pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:19.371408  847956 pod_ready.go:81] duration metric: took 6.07695ms waiting for pod "coredns-5dd5756b68-vh8ng" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.371416  847956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.371468  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-627820
	I1114 15:16:19.371476  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.371483  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.371489  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.373533  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:16:19.373556  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.373565  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.373574  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.373582  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.373590  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.373600  847956 round_trippers.go:580]     Audit-Id: 8c072cd5-330c-487e-adb5-8ec1e9c69735
	I1114 15:16:19.373608  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.373924  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-627820","namespace":"kube-system","uid":"f7ab1cba-820a-4cad-8607-dcf55b587b77","resourceVersion":"817","creationTimestamp":"2023-11-14T15:02:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.63:2379","kubernetes.io/config.hash":"9e94d5d69871d944e272883491976489","kubernetes.io/config.mirror":"9e94d5d69871d944e272883491976489","kubernetes.io/config.seen":"2023-11-14T15:02:10.404956486Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1114 15:16:19.374288  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:19.374303  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.374313  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.374322  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.376292  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:16:19.376313  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.376322  847956 round_trippers.go:580]     Audit-Id: 49cc6e83-65bb-4da0-8eab-9b7184e07a0a
	I1114 15:16:19.376331  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.376340  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.376347  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.376355  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.376367  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.376632  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:16:19.376930  847956 pod_ready.go:92] pod "etcd-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:19.376947  847956 pod_ready.go:81] duration metric: took 5.523496ms waiting for pod "etcd-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.376969  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.377030  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-627820
	I1114 15:16:19.377040  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.377050  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.377068  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.379965  847956 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 15:16:19.379984  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.380004  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.380013  847956 round_trippers.go:580]     Audit-Id: 31c4a2e9-3f3d-4c8b-99fc-e495dc4286d8
	I1114 15:16:19.380025  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.380033  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.380042  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.380050  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.380181  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-627820","namespace":"kube-system","uid":"8a9b9224-3446-46f7-b525-e1f32bb9a33c","resourceVersion":"826","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.63:8443","kubernetes.io/config.hash":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.mirror":"618073575d26c84596a59c7ddac9e2b1","kubernetes.io/config.seen":"2023-11-14T15:02:19.515752674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1114 15:16:19.380668  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:19.380685  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.380692  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.380698  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.382540  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:16:19.382560  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.382570  847956 round_trippers.go:580]     Audit-Id: de707816-3256-4091-813e-7c4a55ee6470
	I1114 15:16:19.382579  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.382590  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.382598  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.382608  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.382617  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.382798  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:16:19.383149  847956 pod_ready.go:92] pod "kube-apiserver-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:19.383167  847956 pod_ready.go:81] duration metric: took 6.190367ms waiting for pod "kube-apiserver-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.383177  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.383226  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-627820
	I1114 15:16:19.383233  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.383240  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.383245  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.385240  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:16:19.385258  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.385264  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.385270  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.385275  847956 round_trippers.go:580]     Audit-Id: 239c7e95-c326-46f7-bdc1-37d5f651e0df
	I1114 15:16:19.385280  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.385288  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.385293  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.385507  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-627820","namespace":"kube-system","uid":"b4440d06-27f9-4455-ae59-2d8c744b99a2","resourceVersion":"816","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.mirror":"b103d6782e9472dc1801b82c4447b3dd","kubernetes.io/config.seen":"2023-11-14T15:02:19.515747223Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1114 15:16:19.385929  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:19.385944  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.385951  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.385956  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.387952  847956 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 15:16:19.387971  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.387980  847956 round_trippers.go:580]     Audit-Id: 7a76c10b-35fa-4c6f-94f2-4a4ff750677e
	I1114 15:16:19.387988  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.387996  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.388004  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.388012  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.388021  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.388210  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:16:19.388592  847956 pod_ready.go:92] pod "kube-controller-manager-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:19.388615  847956 pod_ready.go:81] duration metric: took 5.431081ms waiting for pod "kube-controller-manager-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.388629  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.553035  847956 request.go:629] Waited for 164.327682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:16:19.553121  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4hf2k
	I1114 15:16:19.553127  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.553135  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.553144  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.556362  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:19.556386  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.556393  847956 round_trippers.go:580]     Audit-Id: e140a085-18de-441f-92f2-cb961574e1e2
	I1114 15:16:19.556399  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.556404  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.556409  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.556417  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.556425  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.556702  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4hf2k","generateName":"kube-proxy-","namespace":"kube-system","uid":"205bb9ac-4540-41d6-adb8-078c02d91b4e","resourceVersion":"1168","creationTimestamp":"2023-11-14T15:04:00Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1114 15:16:19.753532  847956 request.go:629] Waited for 196.330497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:16:19.753609  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m03
	I1114 15:16:19.753614  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.753622  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.753629  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.756934  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:19.756961  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.756972  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.756979  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.756986  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.756996  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.757016  847956 round_trippers.go:580]     Audit-Id: 42381719-fc3e-47d0-a3f6-8837d64d6610
	I1114 15:16:19.757041  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.757243  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m03","uid":"ae61b854-86e3-415c-8f53-64e10c4d7cae","resourceVersion":"1189","creationTimestamp":"2023-11-14T15:16:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:16:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:16:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1114 15:16:19.757544  847956 pod_ready.go:92] pod "kube-proxy-4hf2k" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:19.757565  847956 pod_ready.go:81] duration metric: took 368.922319ms waiting for pod "kube-proxy-4hf2k" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.757575  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:19.953060  847956 request.go:629] Waited for 195.393657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:16:19.953197  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6xg9v
	I1114 15:16:19.953222  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:19.953235  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:19.953247  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:19.956528  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:19.956556  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:19.956568  847956 round_trippers.go:580]     Audit-Id: 411f4edb-cdc8-44ab-a457-b54c329e05f6
	I1114 15:16:19.956576  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:19.956584  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:19.956592  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:19.956600  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:19.956607  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:19 GMT
	I1114 15:16:19.956966  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6xg9v","generateName":"kube-proxy-","namespace":"kube-system","uid":"2304a457-3a85-4791-8d18-4e1262db399f","resourceVersion":"1023","creationTimestamp":"2023-11-14T15:03:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:03:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1114 15:16:20.152868  847956 request.go:629] Waited for 195.435384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:16:20.152970  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820-m02
	I1114 15:16:20.152979  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:20.152992  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:20.153005  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:20.156137  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:20.156161  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:20.156171  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:20 GMT
	I1114 15:16:20.156179  847956 round_trippers.go:580]     Audit-Id: ffe39be7-5aef-4098-bebd-3e634fb48273
	I1114 15:16:20.156190  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:20.156196  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:20.156202  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:20.156209  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:20.156455  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820-m02","uid":"5d9328d2-a334-4c14-8c25-db8d2fa4e56c","resourceVersion":"1003","creationTimestamp":"2023-11-14T15:14:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:14:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1114 15:16:20.156856  847956 pod_ready.go:92] pod "kube-proxy-6xg9v" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:20.156878  847956 pod_ready.go:81] duration metric: took 399.296096ms waiting for pod "kube-proxy-6xg9v" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:20.156891  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:20.353162  847956 request.go:629] Waited for 196.195409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:16:20.353239  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m24mc
	I1114 15:16:20.353244  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:20.353253  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:20.353266  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:20.356825  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:20.356854  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:20.356865  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:20.356874  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:20.356883  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:20.356890  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:20 GMT
	I1114 15:16:20.356895  847956 round_trippers.go:580]     Audit-Id: cecafaab-fc16-463c-a5f9-44d4699b02b4
	I1114 15:16:20.356900  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:20.357399  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m24mc","generateName":"kube-proxy-","namespace":"kube-system","uid":"73a6d4c8-2f95-4818-bc62-566099466b42","resourceVersion":"799","creationTimestamp":"2023-11-14T15:02:31Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae9b06e1-d76d-4f74-937e-be563d51c152","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae9b06e1-d76d-4f74-937e-be563d51c152\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5513 chars]
	I1114 15:16:20.552956  847956 request.go:629] Waited for 195.075715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:20.553036  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:20.553044  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:20.553055  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:20.553066  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:20.556165  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:20.556189  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:20.556196  847956 round_trippers.go:580]     Audit-Id: 4a7c4f20-090f-4c5c-95ac-b2432c7b5f20
	I1114 15:16:20.556202  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:20.556207  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:20.556212  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:20.556220  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:20.556228  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:20 GMT
	I1114 15:16:20.556433  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:16:20.556944  847956 pod_ready.go:92] pod "kube-proxy-m24mc" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:20.556967  847956 pod_ready.go:81] duration metric: took 400.067968ms waiting for pod "kube-proxy-m24mc" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:20.556980  847956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:20.752839  847956 request.go:629] Waited for 195.781373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:16:20.752935  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-627820
	I1114 15:16:20.752943  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:20.752955  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:20.752966  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:20.756035  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:20.756063  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:20.756070  847956 round_trippers.go:580]     Audit-Id: f2b205d9-99f0-4dbf-8806-e4fde2dd1622
	I1114 15:16:20.756075  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:20.756088  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:20.756099  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:20.756108  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:20.756118  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:20 GMT
	I1114 15:16:20.756295  847956 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-627820","namespace":"kube-system","uid":"ddbaeac6-28b3-4be5-b8ec-0fd95cf570fd","resourceVersion":"843","creationTimestamp":"2023-11-14T15:02:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.mirror":"7cc53a6a3186a398cdb1e8e8d082916a","kubernetes.io/config.seen":"2023-11-14T15:02:19.515750784Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1114 15:16:20.953114  847956 request.go:629] Waited for 196.382234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:20.953198  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes/multinode-627820
	I1114 15:16:20.953206  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:20.953218  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:20.953236  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:20.958471  847956 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 15:16:20.958498  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:20.958508  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:20.958515  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:20 GMT
	I1114 15:16:20.958523  847956 round_trippers.go:580]     Audit-Id: ae16f8f3-0eb7-4679-aefd-ab928df3df86
	I1114 15:16:20.958530  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:20.958537  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:20.958556  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:20.958778  847956 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T15:02:15Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1114 15:16:20.959111  847956 pod_ready.go:92] pod "kube-scheduler-multinode-627820" in "kube-system" namespace has status "Ready":"True"
	I1114 15:16:20.959131  847956 pod_ready.go:81] duration metric: took 402.142677ms waiting for pod "kube-scheduler-multinode-627820" in "kube-system" namespace to be "Ready" ...
	I1114 15:16:20.959146  847956 pod_ready.go:38] duration metric: took 1.603218562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:16:20.959168  847956 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:16:20.959233  847956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:16:20.973829  847956 system_svc.go:56] duration metric: took 14.653382ms WaitForService to wait for kubelet.
	I1114 15:16:20.973868  847956 kubeadm.go:581] duration metric: took 1.641195648s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:16:20.973889  847956 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:16:21.153395  847956 request.go:629] Waited for 179.421541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.63:8443/api/v1/nodes
	I1114 15:16:21.153486  847956 round_trippers.go:463] GET https://192.168.39.63:8443/api/v1/nodes
	I1114 15:16:21.153493  847956 round_trippers.go:469] Request Headers:
	I1114 15:16:21.153504  847956 round_trippers.go:473]     Accept: application/json, */*
	I1114 15:16:21.153513  847956 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 15:16:21.156908  847956 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 15:16:21.156936  847956 round_trippers.go:577] Response Headers:
	I1114 15:16:21.156945  847956 round_trippers.go:580]     Audit-Id: 21e49334-2c21-478d-a22c-fd3c4fe28833
	I1114 15:16:21.156952  847956 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 15:16:21.156969  847956 round_trippers.go:580]     Content-Type: application/json
	I1114 15:16:21.156976  847956 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b1b4e3f-0738-4767-a496-317b314f0b16
	I1114 15:16:21.156983  847956 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9153d749-f341-4ea1-bf66-22fb1f21fe7a
	I1114 15:16:21.156990  847956 round_trippers.go:580]     Date: Tue, 14 Nov 2023 15:16:21 GMT
	I1114 15:16:21.157468  847956 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1204"},"items":[{"metadata":{"name":"multinode-627820","uid":"07a27cdc-d402-4a86-9891-831ca190fe9c","resourceVersion":"870","creationTimestamp":"2023-11-14T15:02:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-627820","kubernetes.io/os":"linux","minikube.k8s.io/commit":"78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa","minikube.k8s.io/name":"multinode-627820","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T15_02_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15133 chars]
	I1114 15:16:21.158151  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:16:21.158175  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:16:21.158189  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:16:21.158193  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:16:21.158197  847956 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:16:21.158200  847956 node_conditions.go:123] node cpu capacity is 2
	I1114 15:16:21.158204  847956 node_conditions.go:105] duration metric: took 184.310752ms to run NodePressure ...
	I1114 15:16:21.158215  847956 start.go:228] waiting for startup goroutines ...
	I1114 15:16:21.158257  847956 start.go:242] writing updated cluster config ...
	I1114 15:16:21.158548  847956 ssh_runner.go:195] Run: rm -f paused
	I1114 15:16:21.214773  847956 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:16:21.216715  847956 out.go:177] * Done! kubectl is now configured to use "multinode-627820" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:12:05 UTC, ends at Tue 2023-11-14 15:16:22 UTC. --
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.387813771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974982387799273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=85493571-5fe4-4721-985e-7b1a9e0e4fbb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.388557044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3cf89c4a-0577-4ee5-b3d1-eb7952de2ea9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.388627337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3cf89c4a-0577-4ee5-b3d1-eb7952de2ea9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.388834393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:280ee74232481e3090d39fc5344351d51a6011fdfdc03392c23f8cd2c6f46ec2,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974792383812263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5d8fa89a42f57e5798f16e217001aadfd403270e45e2ff7697e1bee2a3fa49,PodSandboxId:cd8179b8a50cc69b6efd1b6ee91c8c8967e4411b6c7dda1fdb861e1454104a67,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974780007512555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a11d5221f61a476d0155f9f456ba9efdac87beb8d1ef819e638e68b4b9ce89,PodSandboxId:c9b4be850f2f02ea04b2f9bf3e0b9cdfa0513ee780c2a5f232a6ac5da508185c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974776549241109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97aef3d1b4d3ebdf802dd81126626e4cee14ed4956b1e279701e87931f3aca5d,PodSandboxId:13819c51fd7be3124930b86f67ebed213bd149423a3726bf3d323b5d234dbcd9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974763652781840,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953e178a982bc333abb61c5fa8c28bf72bed3421eb4499eec877b79007f1a604,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699974761381021093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d55b8ee3369cb74457c7fcbf5099548007afd6d67de1a399089588e513fbaa,PodSandboxId:9c61b065ee77ce8a0593e4225c44681c188f7b995665a3887d3f4a398d324796,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974761136788656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-56609946
6b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795ee47afdfb45cc742e22a54cda6b49fa9b8d7413ecfe2e5b61d3f0541b1a10,PodSandboxId:ddb2e921375f6595377883b32b22ba66a4502d61524c2097cd1426cfe05c9686,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974755124895152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7a999a98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c95d189cda23d3adcaa51a170c6c359a40bb3c51cda08fff77d205dab80822,PodSandboxId:92524b9da3e5bd4d485d19386ffc3c70ddebf7e74bbdbe653596ef376e084b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974754440345386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51744d43b73227d230ad82aa5e2539a6fe1d2046bd91a11feac3575f0224f747,PodSandboxId:32aefdc230e3620cee8ea24ff16f843001377cdade69788481a0bbd06b59bf4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974754274286299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a7543d992766ae53658f09cf05230f27ce73776b3f8d0868a5e348ae88cb63,PodSandboxId:7a40c66c93e86487025fd88c8d4f202887eb5c8b3b5641ea05203f49eafa1365,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974754162345509,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4aacc9a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3cf89c4a-0577-4ee5-b3d1-eb7952de2ea9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.427558168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5a2f8969-68bb-40bb-b749-cfa5b698069f name=/runtime.v1.RuntimeService/Version
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.427678461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5a2f8969-68bb-40bb-b749-cfa5b698069f name=/runtime.v1.RuntimeService/Version
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.429101661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cd118869-9347-40f2-91eb-041097612572 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.429602514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974982429589632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cd118869-9347-40f2-91eb-041097612572 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.430241669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b5e13392-60ab-4fff-8c5d-abe8f39ecd70 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.430288331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b5e13392-60ab-4fff-8c5d-abe8f39ecd70 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.430949577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:280ee74232481e3090d39fc5344351d51a6011fdfdc03392c23f8cd2c6f46ec2,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974792383812263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5d8fa89a42f57e5798f16e217001aadfd403270e45e2ff7697e1bee2a3fa49,PodSandboxId:cd8179b8a50cc69b6efd1b6ee91c8c8967e4411b6c7dda1fdb861e1454104a67,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974780007512555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a11d5221f61a476d0155f9f456ba9efdac87beb8d1ef819e638e68b4b9ce89,PodSandboxId:c9b4be850f2f02ea04b2f9bf3e0b9cdfa0513ee780c2a5f232a6ac5da508185c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974776549241109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97aef3d1b4d3ebdf802dd81126626e4cee14ed4956b1e279701e87931f3aca5d,PodSandboxId:13819c51fd7be3124930b86f67ebed213bd149423a3726bf3d323b5d234dbcd9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974763652781840,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953e178a982bc333abb61c5fa8c28bf72bed3421eb4499eec877b79007f1a604,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699974761381021093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d55b8ee3369cb74457c7fcbf5099548007afd6d67de1a399089588e513fbaa,PodSandboxId:9c61b065ee77ce8a0593e4225c44681c188f7b995665a3887d3f4a398d324796,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974761136788656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-56609946
6b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795ee47afdfb45cc742e22a54cda6b49fa9b8d7413ecfe2e5b61d3f0541b1a10,PodSandboxId:ddb2e921375f6595377883b32b22ba66a4502d61524c2097cd1426cfe05c9686,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974755124895152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7a999a98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c95d189cda23d3adcaa51a170c6c359a40bb3c51cda08fff77d205dab80822,PodSandboxId:92524b9da3e5bd4d485d19386ffc3c70ddebf7e74bbdbe653596ef376e084b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974754440345386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51744d43b73227d230ad82aa5e2539a6fe1d2046bd91a11feac3575f0224f747,PodSandboxId:32aefdc230e3620cee8ea24ff16f843001377cdade69788481a0bbd06b59bf4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974754274286299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a7543d992766ae53658f09cf05230f27ce73776b3f8d0868a5e348ae88cb63,PodSandboxId:7a40c66c93e86487025fd88c8d4f202887eb5c8b3b5641ea05203f49eafa1365,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974754162345509,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4aacc9a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b5e13392-60ab-4fff-8c5d-abe8f39ecd70 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.479439398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b8b20c1c-c335-4652-a3c7-a532a7843e2b name=/runtime.v1.RuntimeService/Version
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.479498149Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b8b20c1c-c335-4652-a3c7-a532a7843e2b name=/runtime.v1.RuntimeService/Version
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.481270286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8fdd010a-4b7b-443f-9cf3-c70df5bda39e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.481637148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974982481624798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8fdd010a-4b7b-443f-9cf3-c70df5bda39e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.482329583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5954c685-0ce4-4074-bb35-7b308ddfb5f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.482475172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5954c685-0ce4-4074-bb35-7b308ddfb5f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.482776591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:280ee74232481e3090d39fc5344351d51a6011fdfdc03392c23f8cd2c6f46ec2,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974792383812263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5d8fa89a42f57e5798f16e217001aadfd403270e45e2ff7697e1bee2a3fa49,PodSandboxId:cd8179b8a50cc69b6efd1b6ee91c8c8967e4411b6c7dda1fdb861e1454104a67,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974780007512555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a11d5221f61a476d0155f9f456ba9efdac87beb8d1ef819e638e68b4b9ce89,PodSandboxId:c9b4be850f2f02ea04b2f9bf3e0b9cdfa0513ee780c2a5f232a6ac5da508185c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974776549241109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97aef3d1b4d3ebdf802dd81126626e4cee14ed4956b1e279701e87931f3aca5d,PodSandboxId:13819c51fd7be3124930b86f67ebed213bd149423a3726bf3d323b5d234dbcd9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974763652781840,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953e178a982bc333abb61c5fa8c28bf72bed3421eb4499eec877b79007f1a604,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699974761381021093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d55b8ee3369cb74457c7fcbf5099548007afd6d67de1a399089588e513fbaa,PodSandboxId:9c61b065ee77ce8a0593e4225c44681c188f7b995665a3887d3f4a398d324796,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974761136788656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-56609946
6b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795ee47afdfb45cc742e22a54cda6b49fa9b8d7413ecfe2e5b61d3f0541b1a10,PodSandboxId:ddb2e921375f6595377883b32b22ba66a4502d61524c2097cd1426cfe05c9686,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974755124895152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7a999a98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c95d189cda23d3adcaa51a170c6c359a40bb3c51cda08fff77d205dab80822,PodSandboxId:92524b9da3e5bd4d485d19386ffc3c70ddebf7e74bbdbe653596ef376e084b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974754440345386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51744d43b73227d230ad82aa5e2539a6fe1d2046bd91a11feac3575f0224f747,PodSandboxId:32aefdc230e3620cee8ea24ff16f843001377cdade69788481a0bbd06b59bf4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974754274286299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a7543d992766ae53658f09cf05230f27ce73776b3f8d0868a5e348ae88cb63,PodSandboxId:7a40c66c93e86487025fd88c8d4f202887eb5c8b3b5641ea05203f49eafa1365,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974754162345509,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4aacc9a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5954c685-0ce4-4074-bb35-7b308ddfb5f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.526092747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=07ffd1fe-e0ab-4e6c-90af-0a6b9a834e88 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.526242357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=07ffd1fe-e0ab-4e6c-90af-0a6b9a834e88 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.527941730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8e8a66d9-2d82-4241-a524-c9bafffba8bd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.528502651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699974982528488568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8e8a66d9-2d82-4241-a524-c9bafffba8bd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.529337282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dfacbc65-07cf-456d-85b4-8152a47ec0e9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.529414852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dfacbc65-07cf-456d-85b4-8152a47ec0e9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:16:22 multinode-627820 crio[711]: time="2023-11-14 15:16:22.529627706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:280ee74232481e3090d39fc5344351d51a6011fdfdc03392c23f8cd2c6f46ec2,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699974792383812263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5d8fa89a42f57e5798f16e217001aadfd403270e45e2ff7697e1bee2a3fa49,PodSandboxId:cd8179b8a50cc69b6efd1b6ee91c8c8967e4411b6c7dda1fdb861e1454104a67,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699974780007512555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nqqlc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e733c69a-d862-453f-9b5b-c634e5adc2e8,},Annotations:map[string]string{io.kubernetes.container.hash: c4f6194d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a11d5221f61a476d0155f9f456ba9efdac87beb8d1ef819e638e68b4b9ce89,PodSandboxId:c9b4be850f2f02ea04b2f9bf3e0b9cdfa0513ee780c2a5f232a6ac5da508185c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699974776549241109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vh8ng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25afe3b4-014e-4180-9597-fb237d622c81,},Annotations:map[string]string{io.kubernetes.container.hash: d037224b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97aef3d1b4d3ebdf802dd81126626e4cee14ed4956b1e279701e87931f3aca5d,PodSandboxId:13819c51fd7be3124930b86f67ebed213bd149423a3726bf3d323b5d234dbcd9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699974763652781840,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f8xnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 457f993f-4895-488a-8277-d5187afda5d3,},Annotations:map[string]string{io.kubernetes.container.hash: a5cef35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953e178a982bc333abb61c5fa8c28bf72bed3421eb4499eec877b79007f1a604,PodSandboxId:d8240759fe2bd1b52065cd676e282835c180f446f0b8bc579af5ec3088beb38e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699974761381021093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f9cf343d-66fc-4de5-b0e0-df38ace21868,},Annotations:map[string]string{io.kubernetes.container.hash: 53a38d46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d55b8ee3369cb74457c7fcbf5099548007afd6d67de1a399089588e513fbaa,PodSandboxId:9c61b065ee77ce8a0593e4225c44681c188f7b995665a3887d3f4a398d324796,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699974761136788656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m24mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a6d4c8-2f95-4818-bc62-56609946
6b42,},Annotations:map[string]string{io.kubernetes.container.hash: a2d657ba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795ee47afdfb45cc742e22a54cda6b49fa9b8d7413ecfe2e5b61d3f0541b1a10,PodSandboxId:ddb2e921375f6595377883b32b22ba66a4502d61524c2097cd1426cfe05c9686,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699974755124895152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e94d5d69871d944e272883491976489,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7a999a98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c95d189cda23d3adcaa51a170c6c359a40bb3c51cda08fff77d205dab80822,PodSandboxId:92524b9da3e5bd4d485d19386ffc3c70ddebf7e74bbdbe653596ef376e084b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699974754440345386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc53a6a3186a398cdb1e8e8d082916a,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51744d43b73227d230ad82aa5e2539a6fe1d2046bd91a11feac3575f0224f747,PodSandboxId:32aefdc230e3620cee8ea24ff16f843001377cdade69788481a0bbd06b59bf4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699974754274286299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b103d6782e9472dc1801b82c4447b3dd,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a7543d992766ae53658f09cf05230f27ce73776b3f8d0868a5e348ae88cb63,PodSandboxId:7a40c66c93e86487025fd88c8d4f202887eb5c8b3b5641ea05203f49eafa1365,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699974754162345509,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-627820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618073575d26c84596a59c7ddac9e2b1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4aacc9a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dfacbc65-07cf-456d-85b4-8152a47ec0e9 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	280ee74232481       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   d8240759fe2bd       storage-provisioner
	3b5d8fa89a42f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   cd8179b8a50cc       busybox-5bc68d56bd-nqqlc
	50a11d5221f61       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   c9b4be850f2f0       coredns-5dd5756b68-vh8ng
	97aef3d1b4d3e       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   13819c51fd7be       kindnet-f8xnr
	953e178a982bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   d8240759fe2bd       storage-provisioner
	09d55b8ee3369       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      3 minutes ago       Running             kube-proxy                1                   9c61b065ee77c       kube-proxy-m24mc
	795ee47afdfb4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   ddb2e921375f6       etcd-multinode-627820
	55c95d189cda2       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      3 minutes ago       Running             kube-scheduler            1                   92524b9da3e5b       kube-scheduler-multinode-627820
	51744d43b7322       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      3 minutes ago       Running             kube-controller-manager   1                   32aefdc230e36       kube-controller-manager-multinode-627820
	29a7543d99276       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      3 minutes ago       Running             kube-apiserver            1                   7a40c66c93e86       kube-apiserver-multinode-627820
	
	* 
	* ==> coredns [50a11d5221f61a476d0155f9f456ba9efdac87beb8d1ef819e638e68b4b9ce89] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58919 - 60031 "HINFO IN 7318748024942288508.3072621798590264966. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013956244s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-627820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-627820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=multinode-627820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_02_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:02:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-627820
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 15:16:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:13:10 +0000   Tue, 14 Nov 2023 15:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:13:10 +0000   Tue, 14 Nov 2023 15:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:13:10 +0000   Tue, 14 Nov 2023 15:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:13:10 +0000   Tue, 14 Nov 2023 15:12:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    multinode-627820
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1aa3e8488b74a5fbd6d2ddab628f96f
	  System UUID:                b1aa3e84-88b7-4a5f-bd6d-2ddab628f96f
	  Boot ID:                    d9a9d2a2-d38f-482c-ade8-e6f11cb4255b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nqqlc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-vh8ng                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-627820                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-f8xnr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-627820             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-627820    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-m24mc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-627820             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-627820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-627820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-627820 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-627820 event: Registered Node multinode-627820 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-627820 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-627820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-627820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-627820 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-627820 event: Registered Node multinode-627820 in Controller
	
	
	Name:               multinode-627820-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-627820-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:14:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-627820-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 15:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:14:37 +0000   Tue, 14 Nov 2023 15:14:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:14:37 +0000   Tue, 14 Nov 2023 15:14:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:14:37 +0000   Tue, 14 Nov 2023 15:14:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:14:37 +0000   Tue, 14 Nov 2023 15:14:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-627820-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 648f3503d7a5414b908e7718376c46b2
	  System UUID:                648f3503-d7a5-414b-908e-7718376c46b2
	  Boot ID:                    10b0a59d-5bfe-447f-92dc-3527bb3c3488
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-sdq8k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-2d26z               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-6xg9v            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 103s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node multinode-627820-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node multinode-627820-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node multinode-627820-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet          Node multinode-627820-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m47s                  kubelet          Node multinode-627820-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m12s (x2 over 3m12s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 105s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)    kubelet          Node multinode-627820-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)    kubelet          Node multinode-627820-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)    kubelet          Node multinode-627820-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                   kubelet          Node multinode-627820-m02 status is now: NodeReady
	  Normal   RegisteredNode           100s                   node-controller  Node multinode-627820-m02 event: Registered Node multinode-627820-m02 in Controller
	
	
	Name:               multinode-627820-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-627820-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:16:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-627820-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:16:18 +0000   Tue, 14 Nov 2023 15:16:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:16:18 +0000   Tue, 14 Nov 2023 15:16:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:16:18 +0000   Tue, 14 Nov 2023 15:16:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:16:18 +0000   Tue, 14 Nov 2023 15:16:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    multinode-627820-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c5611ada54e45958689a24fa3cfaa72
	  System UUID:                6c5611ad-a54e-4595-8689-a24fa3cfaa72
	  Boot ID:                    558e04e0-4205-4c45-b627-8c387075849c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-p5lnm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-8wr7d               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-4hf2k            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 6s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-627820-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-627820-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-627820-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-627820-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-627820-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-627820-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-627820-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-627820-m03 status is now: NodeReady
	  Normal   NodeNotReady             71s                kubelet     Node multinode-627820-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        39s (x2 over 99s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 4s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-627820-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-627820-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-627820-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-627820-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Nov14 15:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067578] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov14 15:12] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.525528] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151769] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.438156] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.448211] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.107667] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.135909] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.098813] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.201797] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.138970] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[ +19.284476] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [795ee47afdfb45cc742e22a54cda6b49fa9b8d7413ecfe2e5b61d3f0541b1a10] <==
	* {"level":"info","ts":"2023-11-14T15:12:36.796181Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T15:12:36.796227Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:12:36.796404Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:12:36.796414Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:12:36.796636Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2023-11-14T15:12:36.796671Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2023-11-14T15:12:37.868538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-14T15:12:37.868576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:12:37.868588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b received MsgPreVoteResp from 365d90f3070fcb7b at term 2"}
	{"level":"info","ts":"2023-11-14T15:12:37.868598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became candidate at term 3"}
	{"level":"info","ts":"2023-11-14T15:12:37.868604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b received MsgVoteResp from 365d90f3070fcb7b at term 3"}
	{"level":"info","ts":"2023-11-14T15:12:37.868611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became leader at term 3"}
	{"level":"info","ts":"2023-11-14T15:12:37.868632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 365d90f3070fcb7b elected leader 365d90f3070fcb7b at term 3"}
	{"level":"info","ts":"2023-11-14T15:12:37.874563Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:12:37.874503Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"365d90f3070fcb7b","local-member-attributes":"{Name:multinode-627820 ClientURLs:[https://192.168.39.63:2379]}","request-path":"/0/members/365d90f3070fcb7b/attributes","cluster-id":"4ca65266b0923ae6","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:12:37.87553Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:12:37.875714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.63:2379"}
	{"level":"info","ts":"2023-11-14T15:12:37.876375Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:12:37.876416Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:12:37.876862Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:12:52.025629Z","caller":"traceutil/trace.go:171","msg":"trace[2033860887] transaction","detail":"{read_only:false; response_revision:829; number_of_response:1; }","duration":"118.089951ms","start":"2023-11-14T15:12:51.907473Z","end":"2023-11-14T15:12:52.025563Z","steps":["trace[2033860887] 'process raft request'  (duration: 117.930645ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:12:52.025876Z","caller":"traceutil/trace.go:171","msg":"trace[990677769] linearizableReadLoop","detail":"{readStateIndex:894; appliedIndex:894; }","duration":"114.115013ms","start":"2023-11-14T15:12:51.911752Z","end":"2023-11-14T15:12:52.025867Z","steps":["trace[990677769] 'read index received'  (duration: 114.11102ms)","trace[990677769] 'applied index is now lower than readState.Index'  (duration: 3.341µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T15:12:52.026059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.30541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-11-14T15:12:52.026331Z","caller":"traceutil/trace.go:171","msg":"trace[760175680] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:829; }","duration":"114.585173ms","start":"2023-11-14T15:12:51.911732Z","end":"2023-11-14T15:12:52.026318Z","steps":["trace[760175680] 'agreement among raft nodes before linearized reading'  (duration: 114.208065ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:12:52.029899Z","caller":"traceutil/trace.go:171","msg":"trace[1286707358] transaction","detail":"{read_only:false; response_revision:830; number_of_response:1; }","duration":"118.031301ms","start":"2023-11-14T15:12:51.911858Z","end":"2023-11-14T15:12:52.02989Z","steps":["trace[1286707358] 'process raft request'  (duration: 117.725301ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  15:16:22 up 4 min,  0 users,  load average: 0.39, 0.29, 0.13
	Linux multinode-627820 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [97aef3d1b4d3ebdf802dd81126626e4cee14ed4956b1e279701e87931f3aca5d] <==
	* I1114 15:15:35.274393       1 main.go:250] Node multinode-627820-m03 has CIDR [10.244.3.0/24] 
	I1114 15:15:45.279431       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:15:45.279525       1 main.go:227] handling current node
	I1114 15:15:45.279548       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1114 15:15:45.279566       1 main.go:250] Node multinode-627820-m02 has CIDR [10.244.1.0/24] 
	I1114 15:15:45.279681       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I1114 15:15:45.279702       1 main.go:250] Node multinode-627820-m03 has CIDR [10.244.3.0/24] 
	I1114 15:15:55.290086       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:15:55.290213       1 main.go:227] handling current node
	I1114 15:15:55.290227       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1114 15:15:55.290236       1 main.go:250] Node multinode-627820-m02 has CIDR [10.244.1.0/24] 
	I1114 15:15:55.290331       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I1114 15:15:55.290363       1 main.go:250] Node multinode-627820-m03 has CIDR [10.244.3.0/24] 
	I1114 15:16:05.295832       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:16:05.295883       1 main.go:227] handling current node
	I1114 15:16:05.295895       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1114 15:16:05.295901       1 main.go:250] Node multinode-627820-m02 has CIDR [10.244.1.0/24] 
	I1114 15:16:05.296019       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I1114 15:16:05.296049       1 main.go:250] Node multinode-627820-m03 has CIDR [10.244.3.0/24] 
	I1114 15:16:15.309017       1 main.go:223] Handling node with IPs: map[192.168.39.63:{}]
	I1114 15:16:15.309419       1 main.go:227] handling current node
	I1114 15:16:15.309506       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1114 15:16:15.309536       1 main.go:250] Node multinode-627820-m02 has CIDR [10.244.1.0/24] 
	I1114 15:16:15.309910       1 main.go:223] Handling node with IPs: map[192.168.39.221:{}]
	I1114 15:16:15.309948       1 main.go:250] Node multinode-627820-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [29a7543d992766ae53658f09cf05230f27ce73776b3f8d0868a5e348ae88cb63] <==
	* I1114 15:12:39.209666       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1114 15:12:39.209798       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1114 15:12:39.209907       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1114 15:12:39.374013       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 15:12:39.407998       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1114 15:12:39.408050       1 shared_informer.go:318] Caches are synced for configmaps
	I1114 15:12:39.408081       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 15:12:39.417066       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 15:12:39.417722       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1114 15:12:39.417762       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1114 15:12:39.418308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1114 15:12:39.419053       1 aggregator.go:166] initial CRD sync complete...
	I1114 15:12:39.419100       1 autoregister_controller.go:141] Starting autoregister controller
	I1114 15:12:39.419107       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 15:12:39.419114       1 cache.go:39] Caches are synced for autoregister controller
	E1114 15:12:39.430816       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1114 15:12:39.471655       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1114 15:12:40.215060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 15:12:42.174271       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 15:12:42.354108       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 15:12:42.362738       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 15:12:42.437110       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 15:12:42.443703       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 15:12:52.075984       1 controller.go:624] quota admission added evaluator for: endpoints
	I1114 15:12:52.079076       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [51744d43b73227d230ad82aa5e2539a6fe1d2046bd91a11feac3575f0224f747] <==
	* I1114 15:14:37.441497       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-rxmbm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-rxmbm"
	I1114 15:14:37.441698       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-627820-m02\" does not exist"
	I1114 15:14:37.450475       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-627820-m02" podCIDRs=["10.244.1.0/24"]
	I1114 15:14:37.578991       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-627820-m02"
	I1114 15:14:38.391835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="104.101µs"
	I1114 15:14:42.060497       1 event.go:307] "Event occurred" object="multinode-627820-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-627820-m02 event: Registered Node multinode-627820-m02 in Controller"
	I1114 15:14:51.605253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.738µs"
	I1114 15:14:52.195870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="92.77µs"
	I1114 15:14:52.199245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="151.545µs"
	I1114 15:15:11.997760       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-627820-m02"
	I1114 15:16:14.637823       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-sdq8k"
	I1114 15:16:14.658572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.277293ms"
	I1114 15:16:14.668829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.002936ms"
	I1114 15:16:14.669004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.367µs"
	I1114 15:16:14.688330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="92.548µs"
	I1114 15:16:16.209736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="141.955µs"
	I1114 15:16:16.482077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.479205ms"
	I1114 15:16:16.482365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.427µs"
	I1114 15:16:17.650357       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-627820-m02"
	I1114 15:16:18.329230       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-627820-m03\" does not exist"
	I1114 15:16:18.330740       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-627820-m02"
	I1114 15:16:18.331742       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-p5lnm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-p5lnm"
	I1114 15:16:18.355489       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-627820-m03" podCIDRs=["10.244.2.0/24"]
	I1114 15:16:18.484941       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-627820-m02"
	I1114 15:16:19.219027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.013µs"
	
	* 
	* ==> kube-proxy [09d55b8ee3369cb74457c7fcbf5099548007afd6d67de1a399089588e513fbaa] <==
	* I1114 15:12:41.514907       1 server_others.go:69] "Using iptables proxy"
	I1114 15:12:41.552373       1 node.go:141] Successfully retrieved node IP: 192.168.39.63
	I1114 15:12:41.850259       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:12:41.850304       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:12:41.854700       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:12:41.854764       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:12:41.854910       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:12:41.854917       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:12:41.856088       1 config.go:188] "Starting service config controller"
	I1114 15:12:41.856105       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:12:41.856196       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:12:41.856202       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:12:41.890353       1 config.go:315] "Starting node config controller"
	I1114 15:12:41.890398       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:12:41.956952       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:12:41.957026       1 shared_informer.go:318] Caches are synced for service config
	I1114 15:12:41.991071       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [55c95d189cda23d3adcaa51a170c6c359a40bb3c51cda08fff77d205dab80822] <==
	* I1114 15:12:36.663197       1 serving.go:348] Generated self-signed cert in-memory
	W1114 15:12:39.338646       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 15:12:39.338769       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 15:12:39.338885       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 15:12:39.338913       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 15:12:39.385769       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 15:12:39.385844       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:12:39.387689       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 15:12:39.387850       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 15:12:39.387887       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 15:12:39.387918       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 15:12:39.490632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:12:05 UTC, ends at Tue 2023-11-14 15:16:23 UTC. --
	Nov 14 15:12:43 multinode-627820 kubelet[919]: E1114 15:12:43.878280     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e733c69a-d862-453f-9b5b-c634e5adc2e8-kube-api-access-xknvd podName:e733c69a-d862-453f-9b5b-c634e5adc2e8 nodeName:}" failed. No retries permitted until 2023-11-14 15:12:47.878266528 +0000 UTC m=+14.973214710 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xknvd" (UniqueName: "kubernetes.io/projected/e733c69a-d862-453f-9b5b-c634e5adc2e8-kube-api-access-xknvd") pod "busybox-5bc68d56bd-nqqlc" (UID: "e733c69a-d862-453f-9b5b-c634e5adc2e8") : object "default"/"kube-root-ca.crt" not registered
	Nov 14 15:12:44 multinode-627820 kubelet[919]: E1114 15:12:44.179703     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-vh8ng" podUID="25afe3b4-014e-4180-9597-fb237d622c81"
	Nov 14 15:12:44 multinode-627820 kubelet[919]: E1114 15:12:44.179901     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-nqqlc" podUID="e733c69a-d862-453f-9b5b-c634e5adc2e8"
	Nov 14 15:12:46 multinode-627820 kubelet[919]: E1114 15:12:46.180485     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-vh8ng" podUID="25afe3b4-014e-4180-9597-fb237d622c81"
	Nov 14 15:12:46 multinode-627820 kubelet[919]: E1114 15:12:46.180627     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-nqqlc" podUID="e733c69a-d862-453f-9b5b-c634e5adc2e8"
	Nov 14 15:12:47 multinode-627820 kubelet[919]: E1114 15:12:47.805605     919 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 14 15:12:47 multinode-627820 kubelet[919]: E1114 15:12:47.805672     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25afe3b4-014e-4180-9597-fb237d622c81-config-volume podName:25afe3b4-014e-4180-9597-fb237d622c81 nodeName:}" failed. No retries permitted until 2023-11-14 15:12:55.805658669 +0000 UTC m=+22.900606852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/25afe3b4-014e-4180-9597-fb237d622c81-config-volume") pod "coredns-5dd5756b68-vh8ng" (UID: "25afe3b4-014e-4180-9597-fb237d622c81") : object "kube-system"/"coredns" not registered
	Nov 14 15:12:47 multinode-627820 kubelet[919]: E1114 15:12:47.906367     919 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 14 15:12:47 multinode-627820 kubelet[919]: E1114 15:12:47.906420     919 projected.go:198] Error preparing data for projected volume kube-api-access-xknvd for pod default/busybox-5bc68d56bd-nqqlc: object "default"/"kube-root-ca.crt" not registered
	Nov 14 15:12:47 multinode-627820 kubelet[919]: E1114 15:12:47.906496     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e733c69a-d862-453f-9b5b-c634e5adc2e8-kube-api-access-xknvd podName:e733c69a-d862-453f-9b5b-c634e5adc2e8 nodeName:}" failed. No retries permitted until 2023-11-14 15:12:55.906482455 +0000 UTC m=+23.001430646 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xknvd" (UniqueName: "kubernetes.io/projected/e733c69a-d862-453f-9b5b-c634e5adc2e8-kube-api-access-xknvd") pod "busybox-5bc68d56bd-nqqlc" (UID: "e733c69a-d862-453f-9b5b-c634e5adc2e8") : object "default"/"kube-root-ca.crt" not registered
	Nov 14 15:12:48 multinode-627820 kubelet[919]: E1114 15:12:48.180082     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-nqqlc" podUID="e733c69a-d862-453f-9b5b-c634e5adc2e8"
	Nov 14 15:12:48 multinode-627820 kubelet[919]: E1114 15:12:48.180445     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-vh8ng" podUID="25afe3b4-014e-4180-9597-fb237d622c81"
	Nov 14 15:13:12 multinode-627820 kubelet[919]: I1114 15:13:12.354231     919 scope.go:117] "RemoveContainer" containerID="953e178a982bc333abb61c5fa8c28bf72bed3421eb4499eec877b79007f1a604"
	Nov 14 15:13:33 multinode-627820 kubelet[919]: E1114 15:13:33.209788     919 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 15:13:33 multinode-627820 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 15:13:33 multinode-627820 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 15:13:33 multinode-627820 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 15:14:33 multinode-627820 kubelet[919]: E1114 15:14:33.212067     919 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 15:14:33 multinode-627820 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 15:14:33 multinode-627820 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 15:14:33 multinode-627820 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 15:15:33 multinode-627820 kubelet[919]: E1114 15:15:33.206314     919 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 15:15:33 multinode-627820 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 15:15:33 multinode-627820 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 15:15:33 multinode-627820 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-627820 -n multinode-627820
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-627820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (690.15s)
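
The kubelet events for multinode-627820-m02 and multinode-627820-m03 above include ContainerGCFailed errors against a missing /var/run/crio/crio.sock while those nodes were being restarted. A minimal sketch of how the CRI socket and kubelet on a worker could be checked by hand, assuming the same profile name and minikube's short node name ("m02") for the second machine:

    # Confirm crio and kubelet are active on the second node.
    minikube ssh -p multinode-627820 -n m02 -- "sudo systemctl is-active crio kubelet"
    # List containers through the same CRI socket the kubelet reports as missing.
    minikube ssh -p multinode-627820 -n m02 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"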

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 stop
E1114 15:16:27.621097  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:16:34.577016  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-627820 stop: exit status 82 (2m1.691529819s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-627820"  ...
	* Stopping node "multinode-627820"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-627820 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-627820 status: exit status 3 (18.719699409s)

                                                
                                                
-- stdout --
	multinode-627820
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-627820-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:18:46.125176  850782 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host
	E1114 15:18:46.125237  850782 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-627820 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-627820 -n multinode-627820
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-627820 -n multinode-627820: exit status 3 (3.182445964s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:18:49.485205  850869 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host
	E1114 15:18:49.485231  850869 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-627820" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.59s)
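
The stop call above exits with GUEST_STOP_TIMEOUT after retrying for about two minutes, and the follow-up status checks can no longer reach the VM over SSH (no route to host on 192.168.39.63:22). A minimal sketch of what could be inspected by hand on the KVM host, assuming the libvirt domain is named after the profile (minikube's usual convention):

    # See whether libvirt still reports the machine as running.
    sudo virsh list --all
    sudo virsh dominfo multinode-627820
    # Collect the full log bundle referenced in the error box above.
    minikube logs --file=logs.txt -p multinode-627820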

                                                
                                    
x
+
TestPreload (278.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-756088 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1114 15:28:52.669119  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-756088 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m17.070970588s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-756088 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-756088 image pull gcr.io/k8s-minikube/busybox: (1.122203721s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-756088
E1114 15:29:30.672904  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-756088: exit status 82 (2m1.093458812s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-756088"  ...
	* Stopping node "test-preload-756088"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-756088 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-11-14 15:31:26.325839133 +0000 UTC m=+3153.856024099
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-756088 -n test-preload-756088
E1114 15:31:27.620903  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:31:34.577682  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-756088 -n test-preload-756088: exit status 3 (18.55719646s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:31:44.877139  853769 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E1114 15:31:44.877165  853769 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-756088" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-756088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-756088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-756088: (1.119666685s)
--- FAIL: TestPreload (278.96s)
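The failure is the stop itself: GUEST_STOP_TIMEOUT / exit status 82, after which the post-mortem status check can no longer reach the VM over SSH (exit status 3, "no route to host"). A minimal sketch for reproducing the sequence locally with the same binary, assuming a working kvm2/libvirt setup (the profile name below is arbitrary):

    out/minikube-linux-amd64 start -p preload-repro --memory=2200 --alsologtostderr --wait=true \
        --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p preload-repro image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p preload-repro                     # this is the step that times out (exit 82)
    out/minikube-linux-amd64 logs -p preload-repro --file=logs.txt     # collect logs, as the error box suggests

If the stop keeps timing out, the /tmp/minikube_stop_*.log file named in the error box is worth attaching alongside logs.txt.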

                                                
                                    
TestRunningBinaryUpgrade (144.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3819609471.exe start -p running-upgrade-588399 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3819609471.exe start -p running-upgrade-588399 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m17.23481922s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-588399 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-588399 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (4.203475909s)

                                                
                                                
-- stdout --
	* [running-upgrade-588399] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-588399 in cluster running-upgrade-588399
	* Updating the running kvm2 "running-upgrade-588399" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:36:17.211722  858990 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:36:17.212181  858990 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:36:17.212196  858990 out.go:309] Setting ErrFile to fd 2...
	I1114 15:36:17.212204  858990 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:36:17.212523  858990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:36:17.213394  858990 out.go:303] Setting JSON to false
	I1114 15:36:17.214871  858990 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":44329,"bootTime":1699931848,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:36:17.214970  858990 start.go:138] virtualization: kvm guest
	I1114 15:36:17.217237  858990 out.go:177] * [running-upgrade-588399] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:36:17.219110  858990 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:36:17.219158  858990 notify.go:220] Checking for updates...
	I1114 15:36:17.220809  858990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:36:17.222526  858990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:36:17.223977  858990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:36:17.225401  858990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:36:17.226943  858990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:36:17.229158  858990 config.go:182] Loaded profile config "running-upgrade-588399": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1114 15:36:17.229197  858990 start_flags.go:694] config upgrade: Driver=kvm2
	I1114 15:36:17.229213  858990 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 15:36:17.229314  858990 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/running-upgrade-588399/config.json ...
	I1114 15:36:17.230337  858990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:36:17.230411  858990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:36:17.245764  858990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40377
	I1114 15:36:17.246297  858990 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:36:17.247077  858990 main.go:141] libmachine: Using API Version  1
	I1114 15:36:17.247104  858990 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:36:17.247455  858990 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:36:17.247695  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:17.249988  858990 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1114 15:36:17.251467  858990 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:36:17.251868  858990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:36:17.251908  858990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:36:17.266162  858990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1114 15:36:17.266621  858990 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:36:17.267067  858990 main.go:141] libmachine: Using API Version  1
	I1114 15:36:17.267091  858990 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:36:17.267410  858990 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:36:17.267581  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:17.307740  858990 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:36:17.309139  858990 start.go:298] selected driver: kvm2
	I1114 15:36:17.309159  858990 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-588399 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.100 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 15:36:17.309305  858990 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:36:17.310215  858990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.310299  858990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:36:17.326006  858990 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:36:17.326509  858990 cni.go:84] Creating CNI manager for ""
	I1114 15:36:17.326543  858990 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1114 15:36:17.326555  858990 start_flags.go:323] config:
	{Name:running-upgrade-588399 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.100 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 15:36:17.326761  858990 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.328474  858990 out.go:177] * Starting control plane node running-upgrade-588399 in cluster running-upgrade-588399
	I1114 15:36:17.329777  858990 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1114 15:36:17.351601  858990 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1114 15:36:17.351770  858990 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/running-upgrade-588399/config.json ...
	I1114 15:36:17.351959  858990 cache.go:107] acquiring lock: {Name:mk41312e6737507669890c94984806e0f4211992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352001  858990 cache.go:107] acquiring lock: {Name:mk108092893ee3ae5922c250de6a7bdf003123e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352033  858990 cache.go:107] acquiring lock: {Name:mk3974d3bf1eb033bdbe4a2375bc9f34fa70b283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352109  858990 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1114 15:36:17.352123  858990 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 183.633µs
	I1114 15:36:17.352135  858990 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1114 15:36:17.352122  858990 cache.go:107] acquiring lock: {Name:mk8121bd60c85497bc93f55d6c7bbfcdd721b433 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352152  858990 cache.go:107] acquiring lock: {Name:mk83fbc712dd450f819157e6d8df2c326774d553 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352173  858990 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1114 15:36:17.352181  858990 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1114 15:36:17.352149  858990 cache.go:107] acquiring lock: {Name:mk3e719a0548fc8409476463df32b473701c55d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352254  858990 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1114 15:36:17.352307  858990 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1114 15:36:17.352311  858990 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1114 15:36:17.351967  858990 cache.go:107] acquiring lock: {Name:mk94c6f8d991eef7349b26c55d6f9fcbc5ab578f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352553  858990 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1114 15:36:17.352536  858990 cache.go:107] acquiring lock: {Name:mk501999aaa32dfba4d0d7672032d1e698b4555b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:36:17.352629  858990 start.go:365] acquiring machines lock for running-upgrade-588399: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:36:17.352703  858990 start.go:369] acquired machines lock for "running-upgrade-588399" in 54.741µs
	I1114 15:36:17.352722  858990 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:36:17.352729  858990 fix.go:54] fixHost starting: minikube
	I1114 15:36:17.352757  858990 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1114 15:36:17.353146  858990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:36:17.353188  858990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:36:17.354149  858990 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1114 15:36:17.354148  858990 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1114 15:36:17.354177  858990 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1114 15:36:17.354192  858990 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1114 15:36:17.354149  858990 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1114 15:36:17.354283  858990 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1114 15:36:17.354706  858990 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1114 15:36:17.371212  858990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I1114 15:36:17.371698  858990 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:36:17.372251  858990 main.go:141] libmachine: Using API Version  1
	I1114 15:36:17.372275  858990 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:36:17.372647  858990 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:36:17.372908  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:17.373088  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetState
	I1114 15:36:17.375218  858990 fix.go:102] recreateIfNeeded on running-upgrade-588399: state=Running err=<nil>
	W1114 15:36:17.375237  858990 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:36:17.378335  858990 out.go:177] * Updating the running kvm2 "running-upgrade-588399" VM ...
	I1114 15:36:17.379706  858990 machine.go:88] provisioning docker machine ...
	I1114 15:36:17.379732  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:17.379985  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetMachineName
	I1114 15:36:17.380256  858990 buildroot.go:166] provisioning hostname "running-upgrade-588399"
	I1114 15:36:17.380282  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetMachineName
	I1114 15:36:17.380440  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:17.383378  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.383871  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:17.383909  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.384072  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:17.384237  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:17.384412  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:17.384561  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:17.384816  858990 main.go:141] libmachine: Using SSH client type: native
	I1114 15:36:17.385164  858990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1114 15:36:17.385182  858990 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-588399 && echo "running-upgrade-588399" | sudo tee /etc/hostname
	I1114 15:36:17.502109  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1114 15:36:17.513016  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1114 15:36:17.525594  858990 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-588399
	
	I1114 15:36:17.525623  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:17.528705  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.529197  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:17.529244  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.529424  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:17.529657  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:17.529832  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:17.530012  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:17.530219  858990 main.go:141] libmachine: Using SSH client type: native
	I1114 15:36:17.530701  858990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1114 15:36:17.530730  858990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-588399' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-588399/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-588399' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:36:17.531464  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1114 15:36:17.575380  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1114 15:36:17.579112  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1114 15:36:17.579606  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1114 15:36:17.579632  858990 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 227.613387ms
	I1114 15:36:17.579643  858990 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1114 15:36:17.585252  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1114 15:36:17.624307  858990 cache.go:162] opening:  /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1114 15:36:17.664957  858990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:36:17.664994  858990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:36:17.665029  858990 buildroot.go:174] setting up certificates
	I1114 15:36:17.665054  858990 provision.go:83] configureAuth start
	I1114 15:36:17.665073  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetMachineName
	I1114 15:36:17.665365  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetIP
	I1114 15:36:17.668959  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.669437  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:17.669466  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.669619  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:17.672885  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.673672  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:17.673714  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.674008  858990 provision.go:138] copyHostCerts
	I1114 15:36:17.674079  858990 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:36:17.674092  858990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:36:17.674146  858990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:36:17.674298  858990 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:36:17.674310  858990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:36:17.674342  858990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:36:17.674430  858990 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:36:17.674439  858990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:36:17.674468  858990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:36:17.674552  858990 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-588399 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube running-upgrade-588399]
	I1114 15:36:17.812019  858990 provision.go:172] copyRemoteCerts
	I1114 15:36:17.812121  858990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:36:17.812149  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:17.816620  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.817093  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:17.817125  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:17.817403  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:17.817613  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:17.817819  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:17.817987  858990 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/running-upgrade-588399/id_rsa Username:docker}
	I1114 15:36:17.922648  858990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:36:17.965591  858990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:36:17.990929  858990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:36:18.002939  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1114 15:36:18.002973  858990 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 650.822171ms
	I1114 15:36:18.002991  858990 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1114 15:36:18.020957  858990 provision.go:86] duration metric: configureAuth took 355.853657ms
	I1114 15:36:18.021259  858990 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:36:18.021470  858990 config.go:182] Loaded profile config "running-upgrade-588399": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1114 15:36:18.021565  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:18.025238  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.025468  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:18.025493  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.025665  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:18.025900  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.026577  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.027004  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:18.027178  858990 main.go:141] libmachine: Using SSH client type: native
	I1114 15:36:18.027519  858990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1114 15:36:18.027548  858990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:36:18.452466  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1114 15:36:18.452500  858990 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.100403798s
	I1114 15:36:18.452519  858990 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1114 15:36:18.657924  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1114 15:36:18.657956  858990 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.305994843s
	I1114 15:36:18.657969  858990 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1114 15:36:18.673649  858990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:36:18.673708  858990 machine.go:91] provisioned docker machine in 1.293970486s
	I1114 15:36:18.673721  858990 start.go:300] post-start starting for "running-upgrade-588399" (driver="kvm2")
	I1114 15:36:18.673733  858990 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:36:18.673765  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:18.674197  858990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:36:18.674293  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:18.677246  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.677702  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:18.677735  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.677824  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:18.678048  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.678338  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:18.678533  858990 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/running-upgrade-588399/id_rsa Username:docker}
	I1114 15:36:18.778683  858990 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:36:18.784019  858990 info.go:137] Remote host: Buildroot 2019.02.7
	I1114 15:36:18.784048  858990 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:36:18.784119  858990 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:36:18.784224  858990 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:36:18.784338  858990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:36:18.791757  858990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:36:18.815931  858990 start.go:303] post-start completed in 142.19106ms
	I1114 15:36:18.815970  858990 fix.go:56] fixHost completed within 1.463240024s
	I1114 15:36:18.816000  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:18.819848  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.820215  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:18.820249  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.820474  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:18.820884  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.821106  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.821347  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:18.821546  858990 main.go:141] libmachine: Using SSH client type: native
	I1114 15:36:18.821924  858990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1114 15:36:18.821939  858990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:36:18.958174  858990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699976178.953218239
	
	I1114 15:36:18.958200  858990 fix.go:206] guest clock: 1699976178.953218239
	I1114 15:36:18.958207  858990 fix.go:219] Guest: 2023-11-14 15:36:18.953218239 +0000 UTC Remote: 2023-11-14 15:36:18.815975111 +0000 UTC m=+1.661504758 (delta=137.243128ms)
	I1114 15:36:18.958243  858990 fix.go:190] guest clock delta is within tolerance: 137.243128ms
	I1114 15:36:18.958250  858990 start.go:83] releasing machines lock for "running-upgrade-588399", held for 1.605535957s
	I1114 15:36:18.958276  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:18.958542  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetIP
	I1114 15:36:18.961703  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.962164  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:18.962211  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.962543  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:18.963434  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:18.963660  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .DriverName
	I1114 15:36:18.963774  858990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:36:18.963819  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:18.964033  858990 ssh_runner.go:195] Run: cat /version.json
	I1114 15:36:18.964064  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHHostname
	I1114 15:36:18.967349  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.967786  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:18.967817  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:18.967915  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.967943  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.968059  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.968290  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:f8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:34:31 +0000 UTC Type:0 Mac:52:54:00:04:c7:f8 Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:running-upgrade-588399 Clientid:01:52:54:00:04:c7:f8}
	I1114 15:36:18.968322  858990 main.go:141] libmachine: (running-upgrade-588399) DBG | domain running-upgrade-588399 has defined IP address 192.168.50.100 and MAC address 52:54:00:04:c7:f8 in network minikube-net
	I1114 15:36:18.968372  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:18.968635  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHPort
	I1114 15:36:18.968634  858990 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/running-upgrade-588399/id_rsa Username:docker}
	I1114 15:36:18.968803  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHKeyPath
	I1114 15:36:18.969005  858990 main.go:141] libmachine: (running-upgrade-588399) Calling .GetSSHUsername
	I1114 15:36:18.969148  858990 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/running-upgrade-588399/id_rsa Username:docker}
	I1114 15:36:18.995917  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1114 15:36:18.995949  858990 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.643947259s
	I1114 15:36:18.995973  858990 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	W1114 15:36:19.062222  858990 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1114 15:36:19.162306  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1114 15:36:19.162335  858990 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.810232153s
	I1114 15:36:19.162347  858990 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1114 15:36:19.796980  858990 cache.go:157] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1114 15:36:19.797019  858990 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.444521523s
	I1114 15:36:19.797039  858990 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1114 15:36:19.797064  858990 cache.go:87] Successfully saved all images to host disk.
	I1114 15:36:19.797138  858990 ssh_runner.go:195] Run: systemctl --version
	I1114 15:36:19.802648  858990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:36:19.873679  858990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:36:19.879934  858990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:36:19.880029  858990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:36:19.886541  858990 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1114 15:36:19.886572  858990 start.go:472] detecting cgroup driver to use...
	I1114 15:36:19.886652  858990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:36:19.898088  858990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:36:19.908301  858990 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:36:19.908428  858990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:36:19.918512  858990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:36:19.927984  858990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1114 15:36:19.936398  858990 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1114 15:36:19.936462  858990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:36:20.044151  858990 docker.go:219] disabling docker service ...
	I1114 15:36:20.044241  858990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:36:21.063901  858990 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.019625896s)
	I1114 15:36:21.064006  858990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:36:21.077013  858990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:36:21.181277  858990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:36:21.301246  858990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:36:21.313712  858990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:36:21.330004  858990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:36:21.330082  858990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:36:21.342092  858990 out.go:177] 
	W1114 15:36:21.343654  858990 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1114 15:36:21.343674  858990 out.go:239] * 
	* 
	W1114 15:36:21.344662  858990 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 15:36:21.346150  858990 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-588399 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-14 15:36:21.365691508 +0000 UTC m=+3448.895876484
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-588399 -n running-upgrade-588399
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-588399 -n running-upgrade-588399: exit status 4 (322.044294ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:36:21.627295  859243 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-588399" does not appear in /home/jenkins/minikube-integration/17598-824991/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-588399" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-588399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-588399
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-588399: (1.834424336s)
--- FAIL: TestRunningBinaryUpgrade (144.04s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (281.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1023288183.exe start -p stopped-upgrade-276452 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1023288183.exe start -p stopped-upgrade-276452 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m15.29056444s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1023288183.exe -p stopped-upgrade-276452 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1023288183.exe -p stopped-upgrade-276452 stop: (1m33.11055081s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-276452 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-276452 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (53.350108487s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-276452] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-276452 in cluster stopped-upgrade-276452
	* Restarting existing kvm2 VM for "stopped-upgrade-276452" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:39:25.342780  861657 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:39:25.343011  861657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:39:25.343025  861657 out.go:309] Setting ErrFile to fd 2...
	I1114 15:39:25.343033  861657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:39:25.343466  861657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:39:25.344366  861657 out.go:303] Setting JSON to false
	I1114 15:39:25.346022  861657 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":44517,"bootTime":1699931848,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:39:25.346153  861657 start.go:138] virtualization: kvm guest
	I1114 15:39:25.349067  861657 out.go:177] * [stopped-upgrade-276452] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:39:25.350975  861657 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:39:25.351017  861657 notify.go:220] Checking for updates...
	I1114 15:39:25.352425  861657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:39:25.353863  861657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:39:25.355760  861657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:39:25.357585  861657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:39:25.359038  861657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:39:25.361098  861657 config.go:182] Loaded profile config "stopped-upgrade-276452": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1114 15:39:25.361123  861657 start_flags.go:694] config upgrade: Driver=kvm2
	I1114 15:39:25.361143  861657 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 15:39:25.361246  861657 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/stopped-upgrade-276452/config.json ...
	I1114 15:39:25.362080  861657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:39:25.362178  861657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:39:25.378212  861657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I1114 15:39:25.378798  861657 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:39:25.379395  861657 main.go:141] libmachine: Using API Version  1
	I1114 15:39:25.379423  861657 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:39:25.379811  861657 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:39:25.379989  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:39:25.381882  861657 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1114 15:39:25.383183  861657 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:39:25.383497  861657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:39:25.383535  861657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:39:25.403827  861657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1114 15:39:25.404578  861657 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:39:25.405146  861657 main.go:141] libmachine: Using API Version  1
	I1114 15:39:25.405177  861657 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:39:25.405700  861657 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:39:25.405943  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:39:25.449866  861657 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:39:25.451177  861657 start.go:298] selected driver: kvm2
	I1114 15:39:25.451197  861657 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-276452 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.4 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 15:39:25.451317  861657 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:39:25.452312  861657 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.452400  861657 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:39:25.468079  861657 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:39:25.468521  861657 cni.go:84] Creating CNI manager for ""
	I1114 15:39:25.468544  861657 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1114 15:39:25.468554  861657 start_flags.go:323] config:
	{Name:stopped-upgrade-276452 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.4 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 15:39:25.468800  861657 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.470750  861657 out.go:177] * Starting control plane node stopped-upgrade-276452 in cluster stopped-upgrade-276452
	I1114 15:39:25.472195  861657 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1114 15:39:25.501204  861657 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1114 15:39:25.501348  861657 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/stopped-upgrade-276452/config.json ...
	I1114 15:39:25.501575  861657 cache.go:107] acquiring lock: {Name:mk8121bd60c85497bc93f55d6c7bbfcdd721b433 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501636  861657 cache.go:107] acquiring lock: {Name:mk501999aaa32dfba4d0d7672032d1e698b4555b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501692  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1114 15:39:25.501707  861657 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 154.14µs
	I1114 15:39:25.501709  861657 start.go:365] acquiring machines lock for stopped-upgrade-276452: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:39:25.501734  861657 cache.go:107] acquiring lock: {Name:mk3974d3bf1eb033bdbe4a2375bc9f34fa70b283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501776  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1114 15:39:25.501769  861657 cache.go:107] acquiring lock: {Name:mk94c6f8d991eef7349b26c55d6f9fcbc5ab578f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501790  861657 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 58.249µs
	I1114 15:39:25.501800  861657 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1114 15:39:25.501816  861657 cache.go:107] acquiring lock: {Name:mk3e719a0548fc8409476463df32b473701c55d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501840  861657 cache.go:107] acquiring lock: {Name:mk83fbc712dd450f819157e6d8df2c326774d553 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501890  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1114 15:39:25.501901  861657 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 63.512µs
	I1114 15:39:25.501913  861657 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1114 15:39:25.501714  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1114 15:39:25.501928  861657 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 294.691µs
	I1114 15:39:25.501940  861657 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1114 15:39:25.501823  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1114 15:39:25.501953  861657 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 191.526µs
	I1114 15:39:25.501965  861657 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1114 15:39:25.501732  861657 cache.go:107] acquiring lock: {Name:mk108092893ee3ae5922c250de6a7bdf003123e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.501993  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1114 15:39:25.502004  861657 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 274.646µs
	I1114 15:39:25.502015  861657 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1114 15:39:25.501860  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1114 15:39:25.502026  861657 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 211.66µs
	I1114 15:39:25.502038  861657 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1114 15:39:25.501718  861657 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1114 15:39:25.501572  861657 cache.go:107] acquiring lock: {Name:mk41312e6737507669890c94984806e0f4211992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:39:25.502069  861657 cache.go:115] /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1114 15:39:25.502079  861657 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 531.801µs
	I1114 15:39:25.502087  861657 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1114 15:39:25.502105  861657 cache.go:87] Successfully saved all images to host disk.
	I1114 15:39:38.034156  861657 start.go:369] acquired machines lock for "stopped-upgrade-276452" in 12.532404866s
	I1114 15:39:38.034237  861657 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:39:38.034250  861657 fix.go:54] fixHost starting: minikube
	I1114 15:39:38.034623  861657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:39:38.034668  861657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:39:38.051320  861657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I1114 15:39:38.051826  861657 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:39:38.052332  861657 main.go:141] libmachine: Using API Version  1
	I1114 15:39:38.052355  861657 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:39:38.052698  861657 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:39:38.052883  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:39:38.053054  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetState
	I1114 15:39:38.054593  861657 fix.go:102] recreateIfNeeded on stopped-upgrade-276452: state=Stopped err=<nil>
	I1114 15:39:38.054623  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	W1114 15:39:38.054818  861657 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:39:38.056950  861657 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-276452" ...
	I1114 15:39:38.058302  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .Start
	I1114 15:39:38.058509  861657 main.go:141] libmachine: (stopped-upgrade-276452) Ensuring networks are active...
	I1114 15:39:38.059319  861657 main.go:141] libmachine: (stopped-upgrade-276452) Ensuring network default is active
	I1114 15:39:38.059765  861657 main.go:141] libmachine: (stopped-upgrade-276452) Ensuring network minikube-net is active
	I1114 15:39:38.060287  861657 main.go:141] libmachine: (stopped-upgrade-276452) Getting domain xml...
	I1114 15:39:38.061163  861657 main.go:141] libmachine: (stopped-upgrade-276452) Creating domain...
	I1114 15:39:39.439758  861657 main.go:141] libmachine: (stopped-upgrade-276452) Waiting to get IP...
	I1114 15:39:39.441289  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:39.441892  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:39.442123  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:39.441969  861750 retry.go:31] will retry after 229.169852ms: waiting for machine to come up
	I1114 15:39:39.672448  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:39.673071  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:39.673106  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:39.673020  861750 retry.go:31] will retry after 302.578446ms: waiting for machine to come up
	I1114 15:39:39.977908  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:39.978932  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:39.978966  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:39.978859  861750 retry.go:31] will retry after 353.404014ms: waiting for machine to come up
	I1114 15:39:40.333572  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:40.334556  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:40.334574  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:40.334504  861750 retry.go:31] will retry after 599.573971ms: waiting for machine to come up
	I1114 15:39:40.936062  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:40.936578  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:40.936636  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:40.936538  861750 retry.go:31] will retry after 625.691133ms: waiting for machine to come up
	I1114 15:39:41.563503  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:41.564002  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:41.564042  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:41.563953  861750 retry.go:31] will retry after 875.754321ms: waiting for machine to come up
	I1114 15:39:42.441740  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:42.442438  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:42.442492  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:42.442352  861750 retry.go:31] will retry after 793.127166ms: waiting for machine to come up
	I1114 15:39:43.237781  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:43.238338  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:43.238363  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:43.238275  861750 retry.go:31] will retry after 1.092122955s: waiting for machine to come up
	I1114 15:39:44.331800  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:44.332289  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:44.332323  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:44.332256  861750 retry.go:31] will retry after 1.236927669s: waiting for machine to come up
	I1114 15:39:45.570468  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:45.571199  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:45.571233  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:45.571100  861750 retry.go:31] will retry after 1.973761353s: waiting for machine to come up
	I1114 15:39:47.546262  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:47.546876  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:47.546903  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:47.546797  861750 retry.go:31] will retry after 2.731423954s: waiting for machine to come up
	I1114 15:39:50.279848  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:50.280381  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:50.280414  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:50.280315  861750 retry.go:31] will retry after 2.509985525s: waiting for machine to come up
	I1114 15:39:52.792061  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:52.792580  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:52.792614  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:52.792517  861750 retry.go:31] will retry after 4.219636256s: waiting for machine to come up
	I1114 15:39:57.014176  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:39:57.014694  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:39:57.014726  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:39:57.014635  861750 retry.go:31] will retry after 4.276344665s: waiting for machine to come up
	I1114 15:40:01.293782  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:01.294378  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:40:01.294422  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:40:01.294294  861750 retry.go:31] will retry after 5.031109468s: waiting for machine to come up
	I1114 15:40:06.326673  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:06.327349  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | unable to find current IP address of domain stopped-upgrade-276452 in network minikube-net
	I1114 15:40:06.327386  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | I1114 15:40:06.327283  861750 retry.go:31] will retry after 7.357314023s: waiting for machine to come up
	I1114 15:40:13.686567  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.687369  861657 main.go:141] libmachine: (stopped-upgrade-276452) Found IP for machine: 192.168.50.4
	I1114 15:40:13.687432  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has current primary IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.687450  861657 main.go:141] libmachine: (stopped-upgrade-276452) Reserving static IP address...
	I1114 15:40:13.687820  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "stopped-upgrade-276452", mac: "52:54:00:68:7c:f0", ip: "192.168.50.4"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:13.687866  861657 main.go:141] libmachine: (stopped-upgrade-276452) Reserved static IP address: 192.168.50.4
	I1114 15:40:13.687886  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-276452", mac: "52:54:00:68:7c:f0", ip: "192.168.50.4"}
	I1114 15:40:13.687905  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | Getting to WaitForSSH function...
	I1114 15:40:13.687918  861657 main.go:141] libmachine: (stopped-upgrade-276452) Waiting for SSH to be available...
	I1114 15:40:13.690949  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.691369  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:13.691401  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.691696  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | Using SSH client type: external
	I1114 15:40:13.691720  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/stopped-upgrade-276452/id_rsa (-rw-------)
	I1114 15:40:13.691765  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/stopped-upgrade-276452/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:40:13.691777  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | About to run SSH command:
	I1114 15:40:13.691792  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | exit 0
	I1114 15:40:13.824867  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | SSH cmd err, output: <nil>: 
	I1114 15:40:13.825307  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetConfigRaw
	I1114 15:40:13.826070  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetIP
	I1114 15:40:13.829238  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.829684  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:13.829736  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.830048  861657 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/stopped-upgrade-276452/config.json ...
	I1114 15:40:13.830239  861657 machine.go:88] provisioning docker machine ...
	I1114 15:40:13.830264  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:40:13.830584  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetMachineName
	I1114 15:40:13.830799  861657 buildroot.go:166] provisioning hostname "stopped-upgrade-276452"
	I1114 15:40:13.830821  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetMachineName
	I1114 15:40:13.831001  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:13.834111  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.834585  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:13.834616  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.834783  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:13.835028  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:13.835208  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:13.835422  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:13.835658  861657 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:13.836089  861657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1114 15:40:13.836105  861657 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-276452 && echo "stopped-upgrade-276452" | sudo tee /etc/hostname
	I1114 15:40:13.960632  861657 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-276452
	
	I1114 15:40:13.960700  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:13.965468  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.965964  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:13.966010  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:13.966321  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:13.966516  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:13.966692  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:13.966907  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:13.967211  861657 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:13.967712  861657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1114 15:40:13.967743  861657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-276452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-276452/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-276452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:40:14.093851  861657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:40:14.093888  861657 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:40:14.093913  861657 buildroot.go:174] setting up certificates
	I1114 15:40:14.093927  861657 provision.go:83] configureAuth start
	I1114 15:40:14.093941  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetMachineName
	I1114 15:40:14.094321  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetIP
	I1114 15:40:14.097629  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.098171  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:14.098216  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.098418  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:14.101320  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.101760  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:14.101805  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.102107  861657 provision.go:138] copyHostCerts
	I1114 15:40:14.102174  861657 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:40:14.102188  861657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:40:14.102267  861657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:40:14.102459  861657 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:40:14.102479  861657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:40:14.102525  861657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:40:14.102630  861657 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:40:14.102643  861657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:40:14.102683  861657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:40:14.102775  861657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-276452 san=[192.168.50.4 192.168.50.4 localhost 127.0.0.1 minikube stopped-upgrade-276452]
	I1114 15:40:14.469024  861657 provision.go:172] copyRemoteCerts
	I1114 15:40:14.469088  861657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:40:14.469116  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:14.472291  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.472691  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:14.472726  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.473048  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:14.473299  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:14.473464  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:14.473605  861657 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/stopped-upgrade-276452/id_rsa Username:docker}
	I1114 15:40:14.565814  861657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:40:14.582266  861657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:40:14.596219  861657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:40:14.610226  861657 provision.go:86] duration metric: configureAuth took 516.279952ms
	I1114 15:40:14.610263  861657 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:40:14.610483  861657 config.go:182] Loaded profile config "stopped-upgrade-276452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1114 15:40:14.610602  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:14.613751  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.614251  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:14.614286  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:14.614477  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:14.614723  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:14.614951  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:14.615178  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:14.615468  861657 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:14.615875  861657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1114 15:40:14.615896  861657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:40:17.706710  861657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:40:17.706743  861657 machine.go:91] provisioned docker machine in 3.876489477s
	I1114 15:40:17.706757  861657 start.go:300] post-start starting for "stopped-upgrade-276452" (driver="kvm2")
	I1114 15:40:17.706772  861657 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:40:17.706794  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:40:17.707160  861657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:40:17.707194  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:17.710577  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.710989  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:17.711020  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.711131  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:17.711335  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:17.711544  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:17.711727  861657 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/stopped-upgrade-276452/id_rsa Username:docker}
	I1114 15:40:17.799573  861657 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:40:17.804306  861657 info.go:137] Remote host: Buildroot 2019.02.7
	I1114 15:40:17.804332  861657 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:40:17.804403  861657 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:40:17.804509  861657 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:40:17.804623  861657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:40:17.810541  861657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:40:17.824404  861657 start.go:303] post-start completed in 117.632859ms
	I1114 15:40:17.824426  861657 fix.go:56] fixHost completed within 39.790177762s
	I1114 15:40:17.824449  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:17.827675  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.828074  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:17.828105  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.828271  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:17.828499  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:17.828665  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:17.828818  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:17.828984  861657 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:17.829315  861657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1114 15:40:17.829327  861657 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:40:17.949377  861657 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699976417.885653238
	
	I1114 15:40:17.949418  861657 fix.go:206] guest clock: 1699976417.885653238
	I1114 15:40:17.949426  861657 fix.go:219] Guest: 2023-11-14 15:40:17.885653238 +0000 UTC Remote: 2023-11-14 15:40:17.824429911 +0000 UTC m=+52.548581483 (delta=61.223327ms)
	I1114 15:40:17.949464  861657 fix.go:190] guest clock delta is within tolerance: 61.223327ms
	I1114 15:40:17.949474  861657 start.go:83] releasing machines lock for "stopped-upgrade-276452", held for 39.915268324s
	I1114 15:40:17.949499  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:40:17.949792  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetIP
	I1114 15:40:17.952609  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.952963  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:17.953011  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.953193  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:40:17.953774  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:40:17.954012  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .DriverName
	I1114 15:40:17.954125  861657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:40:17.954178  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:17.954292  861657 ssh_runner.go:195] Run: cat /version.json
	I1114 15:40:17.954316  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHHostname
	I1114 15:40:17.956876  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.957115  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.957294  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:17.957321  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.957483  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:17.957659  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:7c:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:04 +0000 UTC Type:0 Mac:52:54:00:68:7c:f0 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:stopped-upgrade-276452 Clientid:01:52:54:00:68:7c:f0}
	I1114 15:40:17.957688  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:17.957700  861657 main.go:141] libmachine: (stopped-upgrade-276452) DBG | domain stopped-upgrade-276452 has defined IP address 192.168.50.4 and MAC address 52:54:00:68:7c:f0 in network minikube-net
	I1114 15:40:17.957827  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHPort
	I1114 15:40:17.957897  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:17.958019  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHKeyPath
	I1114 15:40:17.958074  861657 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/stopped-upgrade-276452/id_rsa Username:docker}
	I1114 15:40:17.958187  861657 main.go:141] libmachine: (stopped-upgrade-276452) Calling .GetSSHUsername
	I1114 15:40:17.958362  861657 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/stopped-upgrade-276452/id_rsa Username:docker}
	W1114 15:40:18.062765  861657 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1114 15:40:18.062857  861657 ssh_runner.go:195] Run: systemctl --version
	I1114 15:40:18.067146  861657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:40:18.233341  861657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:40:18.239286  861657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:40:18.239357  861657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:40:18.244797  861657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1114 15:40:18.244822  861657 start.go:472] detecting cgroup driver to use...
	I1114 15:40:18.244884  861657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:40:18.255164  861657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:40:18.263494  861657 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:40:18.263554  861657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:40:18.272437  861657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:40:18.280069  861657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1114 15:40:18.288126  861657 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1114 15:40:18.288209  861657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:40:18.384827  861657 docker.go:219] disabling docker service ...
	I1114 15:40:18.384908  861657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:40:18.398454  861657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:40:18.405966  861657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:40:18.491518  861657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:40:18.583158  861657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:40:18.591588  861657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:40:18.602493  861657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:40:18.602564  861657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:40:18.611778  861657 out.go:177] 
	W1114 15:40:18.613218  861657 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1114 15:40:18.613236  861657 out.go:239] * 
	* 
	W1114 15:40:18.614099  861657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 15:40:18.615296  861657 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-276452 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (281.76s)
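Note on the failure above: the HEAD binary exits with RUNTIME_ENABLE because its sed edit of /etc/crio/crio.conf.d/02-crio.conf fails with "No such file or directory" on the guest that was originally provisioned by minikube v1.6.2, i.e. the legacy guest image does not carry that CRI-O drop-in. The following is a minimal, hypothetical Go sketch (not minikube's actual implementation) of the kind of guard that would surface this as an explicit "drop-in missing" error instead of a raw sed exit status 1; the path constant and the sudo/sed invocation are taken from the log above, everything else is an assumption for illustration only.

// Hypothetical sketch (not minikube's code): check that the CRI-O drop-in
// exists before editing pause_image in it, so a legacy guest image produces
// a clear error rather than an opaque sed failure.
package main

import (
	"fmt"
	"os/exec"
)

// Path assumed from the log output above.
const crioDropIn = "/etc/crio/crio.conf.d/02-crio.conf"

func updatePauseImage(image string) error {
	// The v1.6.2-provisioned guest lacks this file; fail with a descriptive error.
	if err := exec.Command("sudo", "test", "-f", crioDropIn).Run(); err != nil {
		return fmt.Errorf("CRI-O drop-in %s not found (legacy guest image?): %w", crioDropIn, err)
	}
	// Same substitution the log shows, applied only when the file is present.
	sedExpr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
	return exec.Command("sudo", "sed", "-i", sedExpr, crioDropIn).Run()
}

func main() {
	if err := updatePauseImage("registry.k8s.io/pause:3.1"); err != nil {
		fmt.Println("update pause_image:", err)
	}
}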

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (107.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-584924 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-584924 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.458115868s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-584924] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-584924 in cluster pause-584924
	* Updating the running kvm2 "pause-584924" VM ...
	* Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-584924" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
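The assertion at pause_test.go:100 above is, in effect, a substring check on the second-start output: the expected marker "The running cluster does not require reconfiguration" never appears, and the stderr that follows shows that start re-provisioning the machine and restarting CRI-O instead. Below is an illustrative Go sketch of such a check, assuming a hypothetical helper name and using a one-line excerpt of the stdout above; it is not the test's actual code.

// Illustrative sketch of the substring assertion described above; the marker
// string is quoted verbatim from the failure message, the rest is assumed.
package main

import (
	"fmt"
	"strings"
)

const reconfigSkippedMarker = "The running cluster does not require reconfiguration"

// secondStartSkippedReconfig reports whether the second `minikube start`
// detected that the running cluster needed no reconfiguration.
func secondStartSkippedReconfig(startOutput string) bool {
	return strings.Contains(startOutput, reconfigSkippedMarker)
}

func main() {
	// Excerpt of the second-start stdout captured above.
	out := `* Updating the running kvm2 "pause-584924" VM ...`
	fmt.Println("reconfiguration skipped:", secondStartSkippedReconfig(out)) // false -> the test fails
}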
** stderr ** 
	I1114 15:40:29.914249  862303 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:40:29.914441  862303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:40:29.914455  862303 out.go:309] Setting ErrFile to fd 2...
	I1114 15:40:29.914464  862303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:40:29.914794  862303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:40:29.915602  862303 out.go:303] Setting JSON to false
	I1114 15:40:29.917225  862303 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":44582,"bootTime":1699931848,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:40:29.917310  862303 start.go:138] virtualization: kvm guest
	I1114 15:40:29.920024  862303 out.go:177] * [pause-584924] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:40:29.921838  862303 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:40:29.921868  862303 notify.go:220] Checking for updates...
	I1114 15:40:29.923319  862303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:40:29.925023  862303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:40:29.926509  862303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:40:29.927993  862303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:40:29.929924  862303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:40:29.931993  862303 config.go:182] Loaded profile config "pause-584924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:40:29.932634  862303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:40:29.932713  862303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:40:29.948386  862303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1114 15:40:29.948854  862303 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:40:29.949482  862303 main.go:141] libmachine: Using API Version  1
	I1114 15:40:29.949516  862303 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:40:29.949922  862303 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:40:29.950112  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:40:29.950423  862303 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:40:29.950756  862303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:40:29.950804  862303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:40:29.966618  862303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
	I1114 15:40:29.967127  862303 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:40:29.967801  862303 main.go:141] libmachine: Using API Version  1
	I1114 15:40:29.967837  862303 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:40:29.968328  862303 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:40:29.968547  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:40:30.005727  862303 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:40:30.007330  862303 start.go:298] selected driver: kvm2
	I1114 15:40:30.007347  862303 start.go:902] validating driver "kvm2" against &{Name:pause-584924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.3 ClusterName:pause-584924 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installe
r:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:40:30.007511  862303 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:40:30.007843  862303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:40:30.007944  862303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:40:30.024007  862303 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:40:30.025190  862303 cni.go:84] Creating CNI manager for ""
	I1114 15:40:30.025224  862303 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:40:30.025246  862303 start_flags.go:323] config:
	{Name:pause-584924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-584924 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false por
tainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:40:30.025553  862303 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:40:30.028597  862303 out.go:177] * Starting control plane node pause-584924 in cluster pause-584924
	I1114 15:40:30.030073  862303 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:40:30.030133  862303 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:40:30.030155  862303 cache.go:56] Caching tarball of preloaded images
	I1114 15:40:30.030273  862303 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:40:30.030293  862303 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:40:30.030476  862303 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/config.json ...
	I1114 15:40:30.030741  862303 start.go:365] acquiring machines lock for pause-584924: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:40:54.734261  862303 start.go:369] acquired machines lock for "pause-584924" in 24.703478845s
	I1114 15:40:54.734335  862303 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:40:54.734344  862303 fix.go:54] fixHost starting: 
	I1114 15:40:54.734758  862303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:40:54.734807  862303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:40:54.752407  862303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I1114 15:40:54.752832  862303 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:40:54.753368  862303 main.go:141] libmachine: Using API Version  1
	I1114 15:40:54.753397  862303 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:40:54.753790  862303 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:40:54.754001  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:40:54.754194  862303 main.go:141] libmachine: (pause-584924) Calling .GetState
	I1114 15:40:54.757518  862303 fix.go:102] recreateIfNeeded on pause-584924: state=Running err=<nil>
	W1114 15:40:54.757543  862303 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:40:54.814586  862303 out.go:177] * Updating the running kvm2 "pause-584924" VM ...
	I1114 15:40:54.878560  862303 machine.go:88] provisioning docker machine ...
	I1114 15:40:54.878611  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:40:54.878975  862303 main.go:141] libmachine: (pause-584924) Calling .GetMachineName
	I1114 15:40:54.879156  862303 buildroot.go:166] provisioning hostname "pause-584924"
	I1114 15:40:54.879174  862303 main.go:141] libmachine: (pause-584924) Calling .GetMachineName
	I1114 15:40:54.879376  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:40:54.883147  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:54.883627  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:40:54.883662  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:54.883881  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:40:54.884127  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:54.884294  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:54.884490  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:40:54.884688  862303 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:54.887979  862303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1114 15:40:54.888014  862303 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-584924 && echo "pause-584924" | sudo tee /etc/hostname
	I1114 15:40:55.025646  862303 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-584924
	
	I1114 15:40:55.025688  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:40:55.028880  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.029345  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:40:55.029380  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.029529  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:40:55.029753  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:55.029959  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:55.030139  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:40:55.030314  862303 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:55.030686  862303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1114 15:40:55.030707  862303 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-584924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-584924/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-584924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:40:55.153516  862303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:40:55.153551  862303 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:40:55.153612  862303 buildroot.go:174] setting up certificates
	I1114 15:40:55.153626  862303 provision.go:83] configureAuth start
	I1114 15:40:55.153644  862303 main.go:141] libmachine: (pause-584924) Calling .GetMachineName
	I1114 15:40:55.154014  862303 main.go:141] libmachine: (pause-584924) Calling .GetIP
	I1114 15:40:55.158182  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.158551  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:40:55.158584  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.158947  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:40:55.161841  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.162155  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:40:55.162205  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.162363  862303 provision.go:138] copyHostCerts
	I1114 15:40:55.162421  862303 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:40:55.162438  862303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:40:55.162490  862303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:40:55.162650  862303 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:40:55.162664  862303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:40:55.162703  862303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:40:55.162786  862303 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:40:55.162797  862303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:40:55.162826  862303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:40:55.162906  862303 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.pause-584924 san=[192.168.39.22 192.168.39.22 localhost 127.0.0.1 minikube pause-584924]
	I1114 15:40:55.369266  862303 provision.go:172] copyRemoteCerts
	I1114 15:40:55.369372  862303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:40:55.369410  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:40:55.372962  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.373355  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:40:55.373408  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.373581  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:40:55.373915  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:55.374142  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:40:55.374291  862303 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/pause-584924/id_rsa Username:docker}
	I1114 15:40:55.473033  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:40:55.504953  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1114 15:40:55.544103  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:40:55.574819  862303 provision.go:86] duration metric: configureAuth took 421.176132ms
	I1114 15:40:55.574878  862303 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:40:55.633971  862303 config.go:182] Loaded profile config "pause-584924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:40:55.634190  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:40:55.637483  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.637909  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:40:55.637947  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:40:55.638183  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:40:55.638431  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:55.638615  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:40:55.638785  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:40:55.638989  862303 main.go:141] libmachine: Using SSH client type: native
	I1114 15:40:55.639411  862303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1114 15:40:55.639436  862303 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:41:01.366180  862303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:41:01.366211  862303 machine.go:91] provisioned docker machine in 6.487620127s
	I1114 15:41:01.366222  862303 start.go:300] post-start starting for "pause-584924" (driver="kvm2")
	I1114 15:41:01.366233  862303 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:41:01.366250  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:41:01.366603  862303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:41:01.366652  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:41:01.369851  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:01.370433  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:41:01.370554  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:01.370989  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:41:01.371282  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:41:01.371489  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:41:01.371695  862303 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/pause-584924/id_rsa Username:docker}
	I1114 15:41:01.846998  862303 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:41:01.871249  862303 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:41:01.871353  862303 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:41:01.871475  862303 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:41:01.871663  862303 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:41:01.871851  862303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:41:01.896464  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:41:01.950263  862303 start.go:303] post-start completed in 584.023746ms
	I1114 15:41:01.950291  862303 fix.go:56] fixHost completed within 7.215946511s
	I1114 15:41:01.950317  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:41:01.953684  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:01.954263  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:41:01.954289  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:01.954720  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:41:01.954957  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:41:01.955188  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:41:01.955368  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:41:01.955580  862303 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:01.956104  862303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1114 15:41:01.956116  862303 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:41:02.162404  862303 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699976462.158009284
	
	I1114 15:41:02.162510  862303 fix.go:206] guest clock: 1699976462.158009284
	I1114 15:41:02.162537  862303 fix.go:219] Guest: 2023-11-14 15:41:02.158009284 +0000 UTC Remote: 2023-11-14 15:41:01.950295516 +0000 UTC m=+32.095916891 (delta=207.713768ms)
	I1114 15:41:02.162626  862303 fix.go:190] guest clock delta is within tolerance: 207.713768ms
	I1114 15:41:02.162642  862303 start.go:83] releasing machines lock for "pause-584924", held for 7.428337371s
	I1114 15:41:02.162679  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:41:02.163107  862303 main.go:141] libmachine: (pause-584924) Calling .GetIP
	I1114 15:41:02.166398  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:02.166689  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:41:02.166740  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:02.167009  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:41:02.168344  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:41:02.168733  862303 main.go:141] libmachine: (pause-584924) Calling .DriverName
	I1114 15:41:02.168891  862303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:41:02.168962  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:41:02.169079  862303 ssh_runner.go:195] Run: cat /version.json
	I1114 15:41:02.169129  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHHostname
	I1114 15:41:02.173096  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:02.173545  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:02.174008  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:41:02.174060  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:02.174094  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:41:02.174133  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:02.174501  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:41:02.174562  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHPort
	I1114 15:41:02.174797  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:41:02.174858  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHKeyPath
	I1114 15:41:02.174955  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:41:02.175047  862303 main.go:141] libmachine: (pause-584924) Calling .GetSSHUsername
	I1114 15:41:02.175200  862303 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/pause-584924/id_rsa Username:docker}
	I1114 15:41:02.175899  862303 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/pause-584924/id_rsa Username:docker}
	I1114 15:41:02.410783  862303 ssh_runner.go:195] Run: systemctl --version
	I1114 15:41:02.439153  862303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:41:02.690490  862303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:41:02.710712  862303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:41:02.710809  862303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:41:02.733081  862303 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1114 15:41:02.733110  862303 start.go:472] detecting cgroup driver to use...
	I1114 15:41:02.733201  862303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:41:02.759554  862303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:41:02.792012  862303 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:41:02.792090  862303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:41:02.820396  862303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:41:02.843969  862303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:41:03.149406  862303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:41:03.403638  862303 docker.go:219] disabling docker service ...
	I1114 15:41:03.403748  862303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:41:03.437208  862303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:41:03.472012  862303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:41:03.744528  862303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:41:04.125593  862303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:41:04.198638  862303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:41:04.236645  862303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:41:04.236763  862303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:04.253154  862303 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:41:04.253275  862303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:04.272330  862303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:04.289942  862303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:04.308498  862303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:41:04.333774  862303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:41:04.349597  862303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:41:04.363468  862303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:41:04.626466  862303 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:41:06.589921  862303 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.963407533s)
	I1114 15:41:06.589961  862303 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:41:06.590042  862303 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:41:06.597828  862303 start.go:540] Will wait 60s for crictl version
	I1114 15:41:06.597906  862303 ssh_runner.go:195] Run: which crictl
	I1114 15:41:06.604245  862303 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:41:06.901606  862303 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:41:06.901830  862303 ssh_runner.go:195] Run: crio --version
	I1114 15:41:07.185800  862303 ssh_runner.go:195] Run: crio --version
	I1114 15:41:07.328138  862303 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:41:07.329767  862303 main.go:141] libmachine: (pause-584924) Calling .GetIP
	I1114 15:41:07.332785  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:07.333216  862303 main.go:141] libmachine: (pause-584924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:28:a1", ip: ""} in network mk-pause-584924: {Iface:virbr3 ExpiryTime:2023-11-14 16:39:04 +0000 UTC Type:0 Mac:52:54:00:8d:28:a1 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-584924 Clientid:01:52:54:00:8d:28:a1}
	I1114 15:41:07.333250  862303 main.go:141] libmachine: (pause-584924) DBG | domain pause-584924 has defined IP address 192.168.39.22 and MAC address 52:54:00:8d:28:a1 in network mk-pause-584924
	I1114 15:41:07.333559  862303 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:41:07.349892  862303 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:41:07.349972  862303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:41:07.478946  862303 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:41:07.478987  862303 crio.go:415] Images already preloaded, skipping extraction
	I1114 15:41:07.479054  862303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:41:07.569735  862303 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:41:07.569770  862303 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:41:07.569851  862303 ssh_runner.go:195] Run: crio config
	I1114 15:41:07.675630  862303 cni.go:84] Creating CNI manager for ""
	I1114 15:41:07.675720  862303 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:41:07.675754  862303 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:41:07.675792  862303 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-584924 NodeName:pause-584924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:41:07.675974  862303 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-584924"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:41:07.676080  862303 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-584924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-584924 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:41:07.676160  862303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:41:07.688173  862303 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:41:07.688315  862303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:41:07.708177  862303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1114 15:41:07.744349  862303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:41:07.768145  862303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1114 15:41:07.794547  862303 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I1114 15:41:07.808447  862303 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924 for IP: 192.168.39.22
	I1114 15:41:07.808508  862303 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:07.808816  862303 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:41:07.808908  862303 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:41:07.809083  862303 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/client.key
	I1114 15:41:07.809235  862303 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/apiserver.key.a67f3e8b
	I1114 15:41:07.809318  862303 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/proxy-client.key
	I1114 15:41:07.809590  862303 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:41:07.809661  862303 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:41:07.809682  862303 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:41:07.809755  862303 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:41:07.809829  862303 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:41:07.809895  862303 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:41:07.809983  862303 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:41:07.811081  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:41:07.876391  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:41:07.927064  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:41:07.972799  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:41:08.025418  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:41:08.099109  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:41:08.138669  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:41:08.190807  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:41:08.250387  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:41:08.312694  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:41:08.389846  862303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:41:08.446565  862303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:41:08.478224  862303 ssh_runner.go:195] Run: openssl version
	I1114 15:41:08.490878  862303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:41:08.510338  862303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:41:08.522329  862303 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:41:08.522407  862303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:41:08.536380  862303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:41:08.556117  862303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:41:08.581491  862303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:41:08.594363  862303 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:41:08.594452  862303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:41:08.609137  862303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:41:08.630430  862303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:41:08.656476  862303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:41:08.671744  862303 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:41:08.671828  862303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:41:08.687659  862303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:41:08.708845  862303 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:41:08.720553  862303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:41:08.763912  862303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:41:08.782023  862303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:41:08.814807  862303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:41:08.837058  862303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:41:08.855579  862303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:41:08.875585  862303 kubeadm.go:404] StartCluster: {Name:pause-584924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-584924 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:41:08.875755  862303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:41:08.875828  862303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:41:08.979257  862303 cri.go:89] found id: "5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c"
	I1114 15:41:08.979290  862303 cri.go:89] found id: "647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5"
	I1114 15:41:08.979298  862303 cri.go:89] found id: "c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a"
	I1114 15:41:08.979304  862303 cri.go:89] found id: "f01239ab2115e1c44a857394212968f1113622659a4b4e4dc53411c04380d04a"
	I1114 15:41:08.979312  862303 cri.go:89] found id: "e9ddcd07927662f42b027b3931c4a92011568250bbbce23494673f7e48305caa"
	I1114 15:41:08.979318  862303 cri.go:89] found id: "d14a2db57bcaf6afd3e22e588e9b138f1b531bf7f5dd3debae0a3d3ca656ce74"
	I1114 15:41:08.979324  862303 cri.go:89] found id: ""
	I1114 15:41:08.979378  862303 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-584924 -n pause-584924
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-584924 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-584924 logs -n 25: (4.576831501s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo crictl                        | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo crictl                        | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo find                          | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo ip a s                        | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	| ssh     | -p flannel-492851 sudo ip r s                        | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | iptables-save                                        |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | iptables -t nat -L -n -v                             |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /run/flannel/subnet.env                              |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo docker                        | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:41:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:41:25.582662  864128 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:41:25.583160  864128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:41:25.583178  864128 out.go:309] Setting ErrFile to fd 2...
	I1114 15:41:25.583187  864128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:41:25.583571  864128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:41:25.584602  864128 out.go:303] Setting JSON to false
	I1114 15:41:25.586634  864128 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":44638,"bootTime":1699931848,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:41:25.586750  864128 start.go:138] virtualization: kvm guest
	I1114 15:41:25.588996  864128 out.go:177] * [bridge-492851] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:41:25.590923  864128 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:41:25.590872  864128 notify.go:220] Checking for updates...
	I1114 15:41:25.592467  864128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:41:25.593937  864128 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:41:25.595313  864128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:41:25.596785  864128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:41:25.598319  864128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:41:25.600429  864128 config.go:182] Loaded profile config "enable-default-cni-492851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:41:25.600629  864128 config.go:182] Loaded profile config "flannel-492851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:41:25.600873  864128 config.go:182] Loaded profile config "pause-584924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:41:25.601045  864128 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:41:25.647485  864128 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 15:41:25.648650  864128 start.go:298] selected driver: kvm2
	I1114 15:41:25.648671  864128 start.go:902] validating driver "kvm2" against <nil>
	I1114 15:41:25.648686  864128 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:41:25.649732  864128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:41:25.649855  864128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:41:25.666681  864128 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:41:25.666744  864128 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 15:41:25.667033  864128 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:41:25.667121  864128 cni.go:84] Creating CNI manager for "bridge"
	I1114 15:41:25.667138  864128 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 15:41:25.667147  864128 start_flags.go:323] config:
	{Name:bridge-492851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:bridge-492851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:41:25.667314  864128 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:41:25.669368  864128 out.go:177] * Starting control plane node bridge-492851 in cluster bridge-492851
	I1114 15:41:25.275698  862124 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:41:25.275721  862124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:41:25.275749  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHHostname
	I1114 15:41:25.283103  862124 main.go:141] libmachine: (flannel-492851) DBG | domain flannel-492851 has defined MAC address 52:54:00:cf:23:34 in network mk-flannel-492851
	I1114 15:41:25.283662  862124 main.go:141] libmachine: (flannel-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:23:34", ip: ""} in network mk-flannel-492851: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:37 +0000 UTC Type:0 Mac:52:54:00:cf:23:34 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:flannel-492851 Clientid:01:52:54:00:cf:23:34}
	I1114 15:41:25.283697  862124 main.go:141] libmachine: (flannel-492851) DBG | domain flannel-492851 has defined IP address 192.168.50.114 and MAC address 52:54:00:cf:23:34 in network mk-flannel-492851
	I1114 15:41:25.283943  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHPort
	I1114 15:41:25.284190  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHKeyPath
	I1114 15:41:25.284432  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHUsername
	I1114 15:41:25.284615  862124 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/flannel-492851/id_rsa Username:docker}
	I1114 15:41:25.296861  862124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I1114 15:41:25.300862  862124 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:41:25.301454  862124 main.go:141] libmachine: Using API Version  1
	I1114 15:41:25.301479  862124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:41:25.304839  862124 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:41:25.305446  862124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:41:25.305476  862124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:41:25.327647  862124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I1114 15:41:25.328376  862124 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:41:25.329066  862124 main.go:141] libmachine: Using API Version  1
	I1114 15:41:25.329089  862124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:41:25.329537  862124 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:41:25.329788  862124 main.go:141] libmachine: (flannel-492851) Calling .GetState
	I1114 15:41:25.331745  862124 main.go:141] libmachine: (flannel-492851) Calling .DriverName
	I1114 15:41:25.332058  862124 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:41:25.332073  862124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:41:25.332092  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHHostname
	I1114 15:41:25.340868  862124 main.go:141] libmachine: (flannel-492851) DBG | domain flannel-492851 has defined MAC address 52:54:00:cf:23:34 in network mk-flannel-492851
	I1114 15:41:25.340874  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHPort
	I1114 15:41:25.340898  862124 main.go:141] libmachine: (flannel-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:23:34", ip: ""} in network mk-flannel-492851: {Iface:virbr2 ExpiryTime:2023-11-14 16:40:37 +0000 UTC Type:0 Mac:52:54:00:cf:23:34 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:flannel-492851 Clientid:01:52:54:00:cf:23:34}
	I1114 15:41:25.340920  862124 main.go:141] libmachine: (flannel-492851) DBG | domain flannel-492851 has defined IP address 192.168.50.114 and MAC address 52:54:00:cf:23:34 in network mk-flannel-492851
	I1114 15:41:25.341223  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHKeyPath
	I1114 15:41:25.341432  862124 main.go:141] libmachine: (flannel-492851) Calling .GetSSHUsername
	I1114 15:41:25.341637  862124 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/flannel-492851/id_rsa Username:docker}
	I1114 15:41:25.544637  862124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:41:25.616604  862124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:41:26.161724  862124 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1114 15:41:26.680488  862124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.135798239s)
	I1114 15:41:26.680511  862124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.063872172s)
	I1114 15:41:26.680549  862124 main.go:141] libmachine: Making call to close driver server
	I1114 15:41:26.680553  862124 main.go:141] libmachine: Making call to close driver server
	I1114 15:41:26.680565  862124 main.go:141] libmachine: (flannel-492851) Calling .Close
	I1114 15:41:26.680568  862124 main.go:141] libmachine: (flannel-492851) Calling .Close
	I1114 15:41:26.681016  862124 main.go:141] libmachine: (flannel-492851) DBG | Closing plugin on server side
	I1114 15:41:26.681065  862124 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:41:26.681074  862124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:41:26.681085  862124 main.go:141] libmachine: Making call to close driver server
	I1114 15:41:26.681093  862124 main.go:141] libmachine: (flannel-492851) Calling .Close
	I1114 15:41:26.681441  862124 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:41:26.681452  862124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:41:26.681461  862124 main.go:141] libmachine: Making call to close driver server
	I1114 15:41:26.681470  862124 main.go:141] libmachine: (flannel-492851) Calling .Close
	I1114 15:41:26.681627  862124 main.go:141] libmachine: (flannel-492851) DBG | Closing plugin on server side
	I1114 15:41:26.681657  862124 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:41:26.681665  862124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:41:26.681751  862124 main.go:141] libmachine: (flannel-492851) DBG | Closing plugin on server side
	I1114 15:41:26.681783  862124 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:41:26.681791  862124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:41:26.700482  862124 main.go:141] libmachine: Making call to close driver server
	I1114 15:41:26.700514  862124 main.go:141] libmachine: (flannel-492851) Calling .Close
	I1114 15:41:26.703051  862124 main.go:141] libmachine: (flannel-492851) DBG | Closing plugin on server side
	I1114 15:41:26.703070  862124 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:41:26.703092  862124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:41:26.704793  862124 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1114 15:41:24.040217  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:24.040886  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | unable to find current IP address of domain enable-default-cni-492851 in network mk-enable-default-cni-492851
	I1114 15:41:24.040921  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | I1114 15:41:24.040835  862756 retry.go:31] will retry after 3.130800534s: waiting for machine to come up
	I1114 15:41:27.174622  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:27.175308  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | unable to find current IP address of domain enable-default-cni-492851 in network mk-enable-default-cni-492851
	I1114 15:41:27.175340  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | I1114 15:41:27.175254  862756 retry.go:31] will retry after 3.960947774s: waiting for machine to come up
	I1114 15:41:25.997197  862303 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.252675295s)
	I1114 15:41:25.997236  862303 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:41:26.274615  862303 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:41:26.389350  862303 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:41:26.606408  862303 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:41:26.606494  862303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:41:26.647487  862303 api_server.go:72] duration metric: took 41.080034ms to wait for apiserver process to appear ...
	I1114 15:41:26.647519  862303 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:41:26.647539  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:25.670299  864128 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:41:25.670348  864128 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:41:25.670363  864128 cache.go:56] Caching tarball of preloaded images
	I1114 15:41:25.670461  864128 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:41:25.670481  864128 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:41:25.670629  864128 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/config.json ...
	I1114 15:41:25.670656  864128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/config.json: {Name:mk29cc5a9ef7c4701bcab2e62fc6d1a654987959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:25.670815  864128 start.go:365] acquiring machines lock for bridge-492851: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:41:26.706054  862124 addons.go:502] enable addons completed in 1.790912845s: enabled=[storage-provisioner default-storageclass]
	I1114 15:41:27.402928  862124 node_ready.go:58] node "flannel-492851" has status "Ready":"False"
	I1114 15:41:29.403815  862124 node_ready.go:58] node "flannel-492851" has status "Ready":"False"
	I1114 15:41:31.138884  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:31.139457  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | unable to find current IP address of domain enable-default-cni-492851 in network mk-enable-default-cni-492851
	I1114 15:41:31.139482  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | I1114 15:41:31.139409  862756 retry.go:31] will retry after 5.48726921s: waiting for machine to come up
	I1114 15:41:31.648276  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:41:31.648330  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:31.404509  862124 node_ready.go:58] node "flannel-492851" has status "Ready":"False"
	I1114 15:41:31.901910  862124 node_ready.go:49] node "flannel-492851" has status "Ready":"True"
	I1114 15:41:31.901933  862124 node_ready.go:38] duration metric: took 6.708545121s waiting for node "flannel-492851" to be "Ready" ...
	I1114 15:41:31.901943  862124 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:41:31.908721  862124 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-w67fs" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:33.933287  862124 pod_ready.go:102] pod "coredns-5dd5756b68-w67fs" in "kube-system" namespace has status "Ready":"False"
	I1114 15:41:34.927791  862124 pod_ready.go:92] pod "coredns-5dd5756b68-w67fs" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:34.927815  862124 pod_ready.go:81] duration metric: took 3.019030288s waiting for pod "coredns-5dd5756b68-w67fs" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.927825  862124 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.933068  862124 pod_ready.go:92] pod "etcd-flannel-492851" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:34.933086  862124 pod_ready.go:81] duration metric: took 5.255032ms waiting for pod "etcd-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.933094  862124 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.937619  862124 pod_ready.go:92] pod "kube-apiserver-flannel-492851" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:34.937637  862124 pod_ready.go:81] duration metric: took 4.537959ms waiting for pod "kube-apiserver-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.937645  862124 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.942646  862124 pod_ready.go:92] pod "kube-controller-manager-flannel-492851" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:34.942663  862124 pod_ready.go:81] duration metric: took 5.011514ms waiting for pod "kube-controller-manager-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:34.942671  862124 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-qkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:35.102956  862124 pod_ready.go:92] pod "kube-proxy-qkj7d" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:35.102979  862124 pod_ready.go:81] duration metric: took 160.301401ms waiting for pod "kube-proxy-qkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:35.102988  862124 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:35.503126  862124 pod_ready.go:92] pod "kube-scheduler-flannel-492851" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:35.503151  862124 pod_ready.go:81] duration metric: took 400.156426ms waiting for pod "kube-scheduler-flannel-492851" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:35.503162  862124 pod_ready.go:38] duration metric: took 3.601209472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:41:35.503177  862124 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:41:35.503227  862124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:41:35.517374  862124 api_server.go:72] duration metric: took 10.481094665s to wait for apiserver process to appear ...
	I1114 15:41:35.517403  862124 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:41:35.517424  862124 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I1114 15:41:35.522989  862124 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I1114 15:41:35.524141  862124 api_server.go:141] control plane version: v1.28.3
	I1114 15:41:35.524162  862124 api_server.go:131] duration metric: took 6.752541ms to wait for apiserver health ...
	I1114 15:41:35.524169  862124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:41:35.709292  862124 system_pods.go:59] 7 kube-system pods found
	I1114 15:41:35.709325  862124 system_pods.go:61] "coredns-5dd5756b68-w67fs" [17281e02-1fa3-4463-9e02-ec26195bbe26] Running
	I1114 15:41:35.709332  862124 system_pods.go:61] "etcd-flannel-492851" [06732ea1-61c2-4f9a-8c00-8674cddbe118] Running
	I1114 15:41:35.709338  862124 system_pods.go:61] "kube-apiserver-flannel-492851" [39181b53-926f-4dc7-8554-8d0dfc2351d1] Running
	I1114 15:41:35.709344  862124 system_pods.go:61] "kube-controller-manager-flannel-492851" [6f6c10ae-0c84-4af1-8871-83a41e35e708] Running
	I1114 15:41:35.709350  862124 system_pods.go:61] "kube-proxy-qkj7d" [5e979bf7-417b-4a82-9dca-0492701d2276] Running
	I1114 15:41:35.709355  862124 system_pods.go:61] "kube-scheduler-flannel-492851" [ea722169-12f3-4850-957f-611095246d35] Running
	I1114 15:41:35.709363  862124 system_pods.go:61] "storage-provisioner" [73e82e0e-90ee-4408-8aea-4aa4b6af0353] Running
	I1114 15:41:35.709372  862124 system_pods.go:74] duration metric: took 185.19564ms to wait for pod list to return data ...
	I1114 15:41:35.709385  862124 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:41:35.902342  862124 default_sa.go:45] found service account: "default"
	I1114 15:41:35.902376  862124 default_sa.go:55] duration metric: took 192.981807ms for default service account to be created ...
	I1114 15:41:35.902388  862124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:41:36.105721  862124 system_pods.go:86] 7 kube-system pods found
	I1114 15:41:36.105756  862124 system_pods.go:89] "coredns-5dd5756b68-w67fs" [17281e02-1fa3-4463-9e02-ec26195bbe26] Running
	I1114 15:41:36.105762  862124 system_pods.go:89] "etcd-flannel-492851" [06732ea1-61c2-4f9a-8c00-8674cddbe118] Running
	I1114 15:41:36.105769  862124 system_pods.go:89] "kube-apiserver-flannel-492851" [39181b53-926f-4dc7-8554-8d0dfc2351d1] Running
	I1114 15:41:36.105777  862124 system_pods.go:89] "kube-controller-manager-flannel-492851" [6f6c10ae-0c84-4af1-8871-83a41e35e708] Running
	I1114 15:41:36.105784  862124 system_pods.go:89] "kube-proxy-qkj7d" [5e979bf7-417b-4a82-9dca-0492701d2276] Running
	I1114 15:41:36.105791  862124 system_pods.go:89] "kube-scheduler-flannel-492851" [ea722169-12f3-4850-957f-611095246d35] Running
	I1114 15:41:36.105798  862124 system_pods.go:89] "storage-provisioner" [73e82e0e-90ee-4408-8aea-4aa4b6af0353] Running
	I1114 15:41:36.105807  862124 system_pods.go:126] duration metric: took 203.411811ms to wait for k8s-apps to be running ...
	I1114 15:41:36.105822  862124 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:41:36.105880  862124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:41:36.118562  862124 system_svc.go:56] duration metric: took 12.7304ms WaitForService to wait for kubelet.
	I1114 15:41:36.118588  862124 kubeadm.go:581] duration metric: took 11.082316397s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:41:36.118615  862124 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:41:36.304675  862124 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:41:36.304721  862124 node_conditions.go:123] node cpu capacity is 2
	I1114 15:41:36.304752  862124 node_conditions.go:105] duration metric: took 186.116407ms to run NodePressure ...
	I1114 15:41:36.304770  862124 start.go:228] waiting for startup goroutines ...
	I1114 15:41:36.304780  862124 start.go:233] waiting for cluster config update ...
	I1114 15:41:36.304796  862124 start.go:242] writing updated cluster config ...
	I1114 15:41:36.305171  862124 ssh_runner.go:195] Run: rm -f paused
	I1114 15:41:36.354382  862124 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:41:36.356426  862124 out.go:177] * Done! kubectl is now configured to use "flannel-492851" cluster and "default" namespace by default
	I1114 15:41:36.629877  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.630349  862734 main.go:141] libmachine: (enable-default-cni-492851) Found IP for machine: 192.168.61.73
	I1114 15:41:36.630383  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has current primary IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.630397  862734 main.go:141] libmachine: (enable-default-cni-492851) Reserving static IP address...
	I1114 15:41:36.630791  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-492851", mac: "52:54:00:5f:94:e9", ip: "192.168.61.73"} in network mk-enable-default-cni-492851
	I1114 15:41:36.708139  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | Getting to WaitForSSH function...
	I1114 15:41:36.708176  862734 main.go:141] libmachine: (enable-default-cni-492851) Reserved static IP address: 192.168.61.73
	I1114 15:41:36.708193  862734 main.go:141] libmachine: (enable-default-cni-492851) Waiting for SSH to be available...
	I1114 15:41:36.711261  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.711735  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:36.711763  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.711885  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | Using SSH client type: external
	I1114 15:41:36.711916  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/enable-default-cni-492851/id_rsa (-rw-------)
	I1114 15:41:36.711961  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/enable-default-cni-492851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:41:36.711979  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | About to run SSH command:
	I1114 15:41:36.712007  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | exit 0
	I1114 15:41:36.805153  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | SSH cmd err, output: <nil>: 
	I1114 15:41:36.805427  862734 main.go:141] libmachine: (enable-default-cni-492851) KVM machine creation complete!
	I1114 15:41:36.805832  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetConfigRaw
	I1114 15:41:36.806409  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:36.806605  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:36.806834  862734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 15:41:36.806849  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetState
	I1114 15:41:36.808246  862734 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 15:41:36.808263  862734 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 15:41:36.808269  862734 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 15:41:36.808276  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:36.810534  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.810954  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:36.810998  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.811063  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:36.811293  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:36.811441  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:36.811582  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:36.811748  862734 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:36.812100  862734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I1114 15:41:36.812115  862734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 15:41:36.935894  862734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:41:36.935922  862734 main.go:141] libmachine: Detecting the provisioner...
	I1114 15:41:36.935931  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:36.938858  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.939396  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:36.939423  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:36.939667  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:36.939893  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:36.940117  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:36.940267  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:36.940406  862734 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:36.940789  862734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I1114 15:41:36.940803  862734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 15:41:37.057503  862734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 15:41:37.057594  862734 main.go:141] libmachine: found compatible host: buildroot
	I1114 15:41:37.057605  862734 main.go:141] libmachine: Provisioning with buildroot...
	I1114 15:41:37.057616  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetMachineName
	I1114 15:41:37.057892  862734 buildroot.go:166] provisioning hostname "enable-default-cni-492851"
	I1114 15:41:37.057922  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetMachineName
	I1114 15:41:37.058121  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:37.060911  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.061261  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.061293  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.061407  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:37.061626  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.061824  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.061981  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:37.062133  862734 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:37.062526  862734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I1114 15:41:37.062548  862734 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-492851 && echo "enable-default-cni-492851" | sudo tee /etc/hostname
	I1114 15:41:37.193799  862734 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-492851
	
	I1114 15:41:37.193838  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:37.196806  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.197174  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.197208  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.197359  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:37.197544  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.197721  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.197841  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:37.198005  862734 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:37.198333  862734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I1114 15:41:37.198350  862734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-492851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-492851/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-492851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:41:37.325624  862734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:41:37.325666  862734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:41:37.325742  862734 buildroot.go:174] setting up certificates
	I1114 15:41:37.325759  862734 provision.go:83] configureAuth start
	I1114 15:41:37.325780  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetMachineName
	I1114 15:41:37.326177  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetIP
	I1114 15:41:38.205622  864128 start.go:369] acquired machines lock for "bridge-492851" in 12.534777788s
	I1114 15:41:38.205710  864128 start.go:93] Provisioning new machine with config: &{Name:bridge-492851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:bridge-492851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:41:38.206155  864128 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 15:41:37.329167  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.329568  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.329603  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.329752  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:37.331908  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.332247  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.332272  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.332409  862734 provision.go:138] copyHostCerts
	I1114 15:41:37.332473  862734 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:41:37.332488  862734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:41:37.332547  862734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:41:37.332646  862734 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:41:37.332655  862734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:41:37.332679  862734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:41:37.332822  862734 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:41:37.332836  862734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:41:37.332878  862734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:41:37.332949  862734 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-492851 san=[192.168.61.73 192.168.61.73 localhost 127.0.0.1 minikube enable-default-cni-492851]
	I1114 15:41:37.454959  862734 provision.go:172] copyRemoteCerts
	I1114 15:41:37.455025  862734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:41:37.455053  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:37.457642  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.457953  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.457997  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.458183  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:37.458438  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.458669  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:37.458838  862734 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/enable-default-cni-492851/id_rsa Username:docker}
	I1114 15:41:37.547047  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:41:37.570122  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1114 15:41:37.594594  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:41:37.616874  862734 provision.go:86] duration metric: configureAuth took 291.091304ms
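The three scp lines above place the CA and the freshly generated server certificate at /etc/docker/ca.pem, /etc/docker/server.pem and /etc/docker/server-key.pem on the guest. As a minimal sketch (paths taken from the scp lines; this check is not part of the test run), the SANs baked into the server certificate can be confirmed from the guest with:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

which should list the names requested during generation: 192.168.61.73, localhost, 127.0.0.1, minikube and enable-default-cni-492851.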
	I1114 15:41:37.616912  862734 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:41:37.617112  862734 config.go:182] Loaded profile config "enable-default-cni-492851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:41:37.617224  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:37.619976  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.620342  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.620391  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.620594  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:37.620788  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.620958  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.621118  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:37.621273  862734 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:37.621600  862734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I1114 15:41:37.621616  862734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:41:37.946167  862734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:41:37.946209  862734 main.go:141] libmachine: Checking connection to Docker...
	I1114 15:41:37.946222  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetURL
	I1114 15:41:37.947697  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | Using libvirt version 6000000
	I1114 15:41:37.949787  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.950236  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.950276  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.950468  862734 main.go:141] libmachine: Docker is up and running!
	I1114 15:41:37.950491  862734 main.go:141] libmachine: Reticulating splines...
	I1114 15:41:37.950501  862734 client.go:171] LocalClient.Create took 25.527825713s
	I1114 15:41:37.950538  862734 start.go:167] duration metric: libmachine.API.Create for "enable-default-cni-492851" took 25.527904997s
	I1114 15:41:37.950560  862734 start.go:300] post-start starting for "enable-default-cni-492851" (driver="kvm2")
	I1114 15:41:37.950575  862734 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:41:37.950599  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:37.950903  862734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:41:37.950936  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:37.953519  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.953900  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:37.953938  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:37.954063  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:37.954265  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:37.954439  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:37.954578  862734 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/enable-default-cni-492851/id_rsa Username:docker}
	I1114 15:41:38.042927  862734 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:41:38.047074  862734 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:41:38.047097  862734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:41:38.047149  862734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:41:38.047215  862734 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:41:38.047301  862734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:41:38.056663  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:41:38.079059  862734 start.go:303] post-start completed in 128.47955ms
	I1114 15:41:38.079123  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetConfigRaw
	I1114 15:41:38.079713  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetIP
	I1114 15:41:38.082627  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.083158  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:38.083190  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.083467  862734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/config.json ...
	I1114 15:41:38.083635  862734 start.go:128] duration metric: createHost completed in 25.679020517s
	I1114 15:41:38.083660  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:38.086146  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.086639  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:38.086742  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.086927  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:38.087159  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:38.087319  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:38.087477  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:38.087697  862734 main.go:141] libmachine: Using SSH client type: native
	I1114 15:41:38.088176  862734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I1114 15:41:38.088275  862734 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:41:38.205418  862734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699976498.182650870
	
	I1114 15:41:38.205448  862734 fix.go:206] guest clock: 1699976498.182650870
	I1114 15:41:38.205457  862734 fix.go:219] Guest: 2023-11-14 15:41:38.18265087 +0000 UTC Remote: 2023-11-14 15:41:38.083646129 +0000 UTC m=+25.816174281 (delta=99.004741ms)
	I1114 15:41:38.205492  862734 fix.go:190] guest clock delta is within tolerance: 99.004741ms
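The reported delta is plain subtraction of the two clock readings taken moments apart: 15:41:38.182650870 (guest) minus 15:41:38.083646129 (the local minikube process) is 0.099004741 s, i.e. the logged 99.004741 ms, which sits inside the allowed skew, so no guest clock adjustment is needed.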
	I1114 15:41:38.205500  862734 start.go:83] releasing machines lock for "enable-default-cni-492851", held for 25.801011353s
	I1114 15:41:38.205539  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:38.205821  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetIP
	I1114 15:41:38.209735  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.210062  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:38.210094  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.210304  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:38.210893  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:38.211120  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .DriverName
	I1114 15:41:38.211230  862734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:41:38.211280  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:38.211395  862734 ssh_runner.go:195] Run: cat /version.json
	I1114 15:41:38.211464  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHHostname
	I1114 15:41:38.214221  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.214381  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.214674  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:38.214708  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.214819  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:38.214837  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:38.214842  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:38.215024  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:38.215044  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHPort
	I1114 15:41:38.215270  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:38.215279  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHKeyPath
	I1114 15:41:38.215461  862734 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/enable-default-cni-492851/id_rsa Username:docker}
	I1114 15:41:38.215498  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetSSHUsername
	I1114 15:41:38.215674  862734 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/enable-default-cni-492851/id_rsa Username:docker}
	I1114 15:41:38.306686  862734 ssh_runner.go:195] Run: systemctl --version
	I1114 15:41:38.331883  862734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:41:38.494275  862734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:41:38.500856  862734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:41:38.500930  862734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:41:38.515371  862734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:41:38.515396  862734 start.go:472] detecting cgroup driver to use...
	I1114 15:41:38.515472  862734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:41:38.531065  862734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:41:38.543386  862734 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:41:38.543467  862734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:41:38.555805  862734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:41:38.569654  862734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:41:38.674552  862734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:41:38.787029  862734 docker.go:219] disabling docker service ...
	I1114 15:41:38.787120  862734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:41:38.799318  862734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:41:38.810914  862734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:41:38.912799  862734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:41:39.012281  862734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:41:39.024151  862734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:41:39.042741  862734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:41:39.042798  862734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:39.053053  862734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:41:39.053140  862734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:39.062154  862734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:39.071393  862734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:41:39.080771  862734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:41:39.090525  862734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:41:39.098513  862734 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:41:39.098566  862734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:41:39.111610  862734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:41:39.119769  862734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:41:39.232019  862734 ssh_runner.go:195] Run: sudo systemctl restart crio
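Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying (a sketch of the affected keys only; any surrounding TOML table headers in the shipped file are untouched):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

after which systemd reloads its units and restarts crio so the new pause image and the cgroupfs driver take effect.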
	I1114 15:41:39.415941  862734 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:41:39.416103  862734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:41:39.421475  862734 start.go:540] Will wait 60s for crictl version
	I1114 15:41:39.421552  862734 ssh_runner.go:195] Run: which crictl
	I1114 15:41:39.425421  862734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:41:39.470908  862734 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:41:39.470999  862734 ssh_runner.go:195] Run: crio --version
	I1114 15:41:39.518514  862734 ssh_runner.go:195] Run: crio --version
	I1114 15:41:39.582199  862734 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:41:36.649155  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:41:37.149912  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:38.208520  864128 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1114 15:41:38.208715  864128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:41:38.208783  864128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:41:38.228209  864128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I1114 15:41:38.228676  864128 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:41:38.229300  864128 main.go:141] libmachine: Using API Version  1
	I1114 15:41:38.229327  864128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:41:38.229750  864128 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:41:38.229942  864128 main.go:141] libmachine: (bridge-492851) Calling .GetMachineName
	I1114 15:41:38.230121  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:41:38.230287  864128 start.go:159] libmachine.API.Create for "bridge-492851" (driver="kvm2")
	I1114 15:41:38.230326  864128 client.go:168] LocalClient.Create starting
	I1114 15:41:38.230354  864128 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 15:41:38.230385  864128 main.go:141] libmachine: Decoding PEM data...
	I1114 15:41:38.230402  864128 main.go:141] libmachine: Parsing certificate...
	I1114 15:41:38.230477  864128 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 15:41:38.230503  864128 main.go:141] libmachine: Decoding PEM data...
	I1114 15:41:38.230523  864128 main.go:141] libmachine: Parsing certificate...
	I1114 15:41:38.230553  864128 main.go:141] libmachine: Running pre-create checks...
	I1114 15:41:38.230566  864128 main.go:141] libmachine: (bridge-492851) Calling .PreCreateCheck
	I1114 15:41:38.230925  864128 main.go:141] libmachine: (bridge-492851) Calling .GetConfigRaw
	I1114 15:41:38.231343  864128 main.go:141] libmachine: Creating machine...
	I1114 15:41:38.231359  864128 main.go:141] libmachine: (bridge-492851) Calling .Create
	I1114 15:41:38.231512  864128 main.go:141] libmachine: (bridge-492851) Creating KVM machine...
	I1114 15:41:38.232770  864128 main.go:141] libmachine: (bridge-492851) DBG | found existing default KVM network
	I1114 15:41:38.233962  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.233794  864240 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:33:ec:5e} reservation:<nil>}
	I1114 15:41:38.234830  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.234755  864240 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:b2:e3} reservation:<nil>}
	I1114 15:41:38.235866  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.235797  864240 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:52:b8} reservation:<nil>}
	I1114 15:41:38.236999  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.236915  864240 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000350650}
	I1114 15:41:38.242704  864128 main.go:141] libmachine: (bridge-492851) DBG | trying to create private KVM network mk-bridge-492851 192.168.72.0/24...
	I1114 15:41:38.325582  864128 main.go:141] libmachine: (bridge-492851) DBG | private KVM network mk-bridge-492851 192.168.72.0/24 created
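The scan above walks candidate private /24 subnets, skips 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because existing libvirt networks (virbr3, virbr2, virbr4) already occupy them, and settles on the first free range, 192.168.72.0/24, for the new mk-bridge-492851 network. The same picture can be inspected on the host with standard virsh tooling (a hypothetical check, not issued by the test):

	virsh net-list --all
	virsh net-dumpxml mk-bridge-492851 | grep '<ip '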
	I1114 15:41:38.325617  864128 main.go:141] libmachine: (bridge-492851) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851 ...
	I1114 15:41:38.325634  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.325582  864240 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:41:38.325651  864128 main.go:141] libmachine: (bridge-492851) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 15:41:38.325747  864128 main.go:141] libmachine: (bridge-492851) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 15:41:38.561855  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.561685  864240 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa...
	I1114 15:41:38.700689  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.700535  864240 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/bridge-492851.rawdisk...
	I1114 15:41:38.700724  864128 main.go:141] libmachine: (bridge-492851) DBG | Writing magic tar header
	I1114 15:41:38.700762  864128 main.go:141] libmachine: (bridge-492851) DBG | Writing SSH key tar header
	I1114 15:41:38.700783  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:38.700647  864240 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851 ...
	I1114 15:41:38.700803  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851
	I1114 15:41:38.700879  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 15:41:38.700910  864128 main.go:141] libmachine: (bridge-492851) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851 (perms=drwx------)
	I1114 15:41:38.700921  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:41:38.700944  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 15:41:38.700955  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 15:41:38.700969  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home/jenkins
	I1114 15:41:38.700979  864128 main.go:141] libmachine: (bridge-492851) DBG | Checking permissions on dir: /home
	I1114 15:41:38.700994  864128 main.go:141] libmachine: (bridge-492851) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 15:41:38.701012  864128 main.go:141] libmachine: (bridge-492851) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 15:41:38.701032  864128 main.go:141] libmachine: (bridge-492851) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 15:41:38.701040  864128 main.go:141] libmachine: (bridge-492851) DBG | Skipping /home - not owner
	I1114 15:41:38.701055  864128 main.go:141] libmachine: (bridge-492851) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 15:41:38.701071  864128 main.go:141] libmachine: (bridge-492851) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 15:41:38.701081  864128 main.go:141] libmachine: (bridge-492851) Creating domain...
	I1114 15:41:38.702272  864128 main.go:141] libmachine: (bridge-492851) define libvirt domain using xml: 
	I1114 15:41:38.702325  864128 main.go:141] libmachine: (bridge-492851) <domain type='kvm'>
	I1114 15:41:38.702341  864128 main.go:141] libmachine: (bridge-492851)   <name>bridge-492851</name>
	I1114 15:41:38.702362  864128 main.go:141] libmachine: (bridge-492851)   <memory unit='MiB'>3072</memory>
	I1114 15:41:38.702399  864128 main.go:141] libmachine: (bridge-492851)   <vcpu>2</vcpu>
	I1114 15:41:38.702426  864128 main.go:141] libmachine: (bridge-492851)   <features>
	I1114 15:41:38.702437  864128 main.go:141] libmachine: (bridge-492851)     <acpi/>
	I1114 15:41:38.702448  864128 main.go:141] libmachine: (bridge-492851)     <apic/>
	I1114 15:41:38.702459  864128 main.go:141] libmachine: (bridge-492851)     <pae/>
	I1114 15:41:38.702471  864128 main.go:141] libmachine: (bridge-492851)     
	I1114 15:41:38.702480  864128 main.go:141] libmachine: (bridge-492851)   </features>
	I1114 15:41:38.702491  864128 main.go:141] libmachine: (bridge-492851)   <cpu mode='host-passthrough'>
	I1114 15:41:38.702507  864128 main.go:141] libmachine: (bridge-492851)   
	I1114 15:41:38.702521  864128 main.go:141] libmachine: (bridge-492851)   </cpu>
	I1114 15:41:38.702561  864128 main.go:141] libmachine: (bridge-492851)   <os>
	I1114 15:41:38.702608  864128 main.go:141] libmachine: (bridge-492851)     <type>hvm</type>
	I1114 15:41:38.702621  864128 main.go:141] libmachine: (bridge-492851)     <boot dev='cdrom'/>
	I1114 15:41:38.702636  864128 main.go:141] libmachine: (bridge-492851)     <boot dev='hd'/>
	I1114 15:41:38.702649  864128 main.go:141] libmachine: (bridge-492851)     <bootmenu enable='no'/>
	I1114 15:41:38.702659  864128 main.go:141] libmachine: (bridge-492851)   </os>
	I1114 15:41:38.702673  864128 main.go:141] libmachine: (bridge-492851)   <devices>
	I1114 15:41:38.702686  864128 main.go:141] libmachine: (bridge-492851)     <disk type='file' device='cdrom'>
	I1114 15:41:38.702702  864128 main.go:141] libmachine: (bridge-492851)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/boot2docker.iso'/>
	I1114 15:41:38.702717  864128 main.go:141] libmachine: (bridge-492851)       <target dev='hdc' bus='scsi'/>
	I1114 15:41:38.702726  864128 main.go:141] libmachine: (bridge-492851)       <readonly/>
	I1114 15:41:38.702760  864128 main.go:141] libmachine: (bridge-492851)     </disk>
	I1114 15:41:38.702776  864128 main.go:141] libmachine: (bridge-492851)     <disk type='file' device='disk'>
	I1114 15:41:38.702791  864128 main.go:141] libmachine: (bridge-492851)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 15:41:38.702806  864128 main.go:141] libmachine: (bridge-492851)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/bridge-492851.rawdisk'/>
	I1114 15:41:38.702819  864128 main.go:141] libmachine: (bridge-492851)       <target dev='hda' bus='virtio'/>
	I1114 15:41:38.702841  864128 main.go:141] libmachine: (bridge-492851)     </disk>
	I1114 15:41:38.702869  864128 main.go:141] libmachine: (bridge-492851)     <interface type='network'>
	I1114 15:41:38.702885  864128 main.go:141] libmachine: (bridge-492851)       <source network='mk-bridge-492851'/>
	I1114 15:41:38.702900  864128 main.go:141] libmachine: (bridge-492851)       <model type='virtio'/>
	I1114 15:41:38.702910  864128 main.go:141] libmachine: (bridge-492851)     </interface>
	I1114 15:41:38.702920  864128 main.go:141] libmachine: (bridge-492851)     <interface type='network'>
	I1114 15:41:38.702942  864128 main.go:141] libmachine: (bridge-492851)       <source network='default'/>
	I1114 15:41:38.702954  864128 main.go:141] libmachine: (bridge-492851)       <model type='virtio'/>
	I1114 15:41:38.702967  864128 main.go:141] libmachine: (bridge-492851)     </interface>
	I1114 15:41:38.702981  864128 main.go:141] libmachine: (bridge-492851)     <serial type='pty'>
	I1114 15:41:38.702994  864128 main.go:141] libmachine: (bridge-492851)       <target port='0'/>
	I1114 15:41:38.703003  864128 main.go:141] libmachine: (bridge-492851)     </serial>
	I1114 15:41:38.703016  864128 main.go:141] libmachine: (bridge-492851)     <console type='pty'>
	I1114 15:41:38.703029  864128 main.go:141] libmachine: (bridge-492851)       <target type='serial' port='0'/>
	I1114 15:41:38.703039  864128 main.go:141] libmachine: (bridge-492851)     </console>
	I1114 15:41:38.703046  864128 main.go:141] libmachine: (bridge-492851)     <rng model='virtio'>
	I1114 15:41:38.703061  864128 main.go:141] libmachine: (bridge-492851)       <backend model='random'>/dev/random</backend>
	I1114 15:41:38.703074  864128 main.go:141] libmachine: (bridge-492851)     </rng>
	I1114 15:41:38.703086  864128 main.go:141] libmachine: (bridge-492851)     
	I1114 15:41:38.703095  864128 main.go:141] libmachine: (bridge-492851)     
	I1114 15:41:38.703106  864128 main.go:141] libmachine: (bridge-492851)   </devices>
	I1114 15:41:38.703117  864128 main.go:141] libmachine: (bridge-492851) </domain>
	I1114 15:41:38.703129  864128 main.go:141] libmachine: (bridge-492851) 
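Read as one document, the XML emitted line by line above defines a 3072 MiB, 2-vCPU host-passthrough domain that boots from the boot2docker ISO attached as a SCSI cdrom, uses the raw disk image as a virtio disk, attaches two virtio NICs (one on mk-bridge-492851, one on the libvirt default network) and a virtio RNG fed from /dev/random. Once the domain is defined it can be examined with ordinary virsh commands (a hypothetical check, not part of the run):

	virsh dumpxml bridge-492851
	virsh domiflist bridge-492851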
	I1114 15:41:38.707948  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:eb:25:13 in network default
	I1114 15:41:38.708779  864128 main.go:141] libmachine: (bridge-492851) Ensuring networks are active...
	I1114 15:41:38.708806  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:38.709569  864128 main.go:141] libmachine: (bridge-492851) Ensuring network default is active
	I1114 15:41:38.709956  864128 main.go:141] libmachine: (bridge-492851) Ensuring network mk-bridge-492851 is active
	I1114 15:41:38.710510  864128 main.go:141] libmachine: (bridge-492851) Getting domain xml...
	I1114 15:41:38.711272  864128 main.go:141] libmachine: (bridge-492851) Creating domain...
	I1114 15:41:40.135744  864128 main.go:141] libmachine: (bridge-492851) Waiting to get IP...
	I1114 15:41:40.136863  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:40.137501  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:40.137532  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:40.137476  864240 retry.go:31] will retry after 294.144764ms: waiting for machine to come up
	I1114 15:41:40.433209  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:40.433805  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:40.433829  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:40.433753  864240 retry.go:31] will retry after 270.330805ms: waiting for machine to come up
	I1114 15:41:39.583605  862734 main.go:141] libmachine: (enable-default-cni-492851) Calling .GetIP
	I1114 15:41:39.587018  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:39.587539  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:94:e9", ip: ""} in network mk-enable-default-cni-492851: {Iface:virbr4 ExpiryTime:2023-11-14 16:41:30 +0000 UTC Type:0 Mac:52:54:00:5f:94:e9 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:enable-default-cni-492851 Clientid:01:52:54:00:5f:94:e9}
	I1114 15:41:39.587579  862734 main.go:141] libmachine: (enable-default-cni-492851) DBG | domain enable-default-cni-492851 has defined IP address 192.168.61.73 and MAC address 52:54:00:5f:94:e9 in network mk-enable-default-cni-492851
	I1114 15:41:39.587794  862734 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1114 15:41:39.591850  862734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:41:39.604915  862734 localpath.go:92] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.crt -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt
	I1114 15:41:39.605067  862734 localpath.go:117] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.key -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.key
	I1114 15:41:39.605209  862734 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:41:39.605265  862734 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:41:39.638902  862734 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:41:39.638985  862734 ssh_runner.go:195] Run: which lz4
	I1114 15:41:39.642728  862734 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:41:39.646507  862734 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:41:39.646533  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:41:41.533307  862734 crio.go:444] Took 1.890597 seconds to copy over tarball
	I1114 15:41:41.533389  862734 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
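The preload tarball (roughly 458 MB, copied to /preloaded.tar.lz4 above) is unpacked straight into /var so that cri-o's image store is already populated before kubeadm runs, as the later "all images are preloaded" message confirms. The tar -I lz4 form simply filters the archive through the lz4 binary; an equivalent manual invocation, assuming lz4 is on the PATH, would be:

	lz4 -dc /preloaded.tar.lz4 | sudo tar -C /var -x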
	I1114 15:41:42.150908  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:41:42.150969  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:43.878859  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": read tcp 192.168.39.1:36158->192.168.39.22:8443: read: connection reset by peer
	I1114 15:41:43.878942  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:43.879495  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:44.149900  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:44.150588  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:44.649929  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:44.650830  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:40.706446  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:40.707089  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:40.707122  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:40.707045  864240 retry.go:31] will retry after 404.334786ms: waiting for machine to come up
	I1114 15:41:41.112666  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:41.113357  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:41.113388  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:41.113304  864240 retry.go:31] will retry after 586.962768ms: waiting for machine to come up
	I1114 15:41:41.702031  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:41.702591  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:41.702616  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:41.702494  864240 retry.go:31] will retry after 629.426654ms: waiting for machine to come up
	I1114 15:41:42.333990  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:42.334524  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:42.334558  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:42.334471  864240 retry.go:31] will retry after 874.259012ms: waiting for machine to come up
	I1114 15:41:43.209954  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:43.210523  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:43.210555  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:43.210469  864240 retry.go:31] will retry after 1.084561498s: waiting for machine to come up
	I1114 15:41:44.297367  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:44.297945  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:44.297986  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:44.297881  864240 retry.go:31] will retry after 1.317692278s: waiting for machine to come up
	I1114 15:41:44.899237  862734 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.365809586s)
	I1114 15:41:44.899276  862734 crio.go:451] Took 3.365931 seconds to extract the tarball
	I1114 15:41:44.899319  862734 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:41:44.946401  862734 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:41:45.029855  862734 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:41:45.029889  862734 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:41:45.029979  862734 ssh_runner.go:195] Run: crio config
	I1114 15:41:45.107715  862734 cni.go:84] Creating CNI manager for "bridge"
	I1114 15:41:45.107760  862734 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:41:45.107795  862734 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.73 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-492851 NodeName:enable-default-cni-492851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:41:45.108017  862734 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-492851"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:41:45.108135  862734 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=enable-default-cni-492851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-492851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I1114 15:41:45.108205  862734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:41:45.120198  862734 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:41:45.120285  862734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:41:45.131598  862734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (384 bytes)
	I1114 15:41:45.152117  862734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:41:45.171058  862734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1114 15:41:45.191673  862734 ssh_runner.go:195] Run: grep 192.168.61.73	control-plane.minikube.internal$ /etc/hosts
	I1114 15:41:45.196021  862734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:41:45.208785  862734 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851 for IP: 192.168.61.73
	I1114 15:41:45.208826  862734 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:45.208997  862734 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:41:45.209056  862734 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:41:45.209154  862734 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.key
	I1114 15:41:45.209185  862734 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.key.03699f53
	I1114 15:41:45.209214  862734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.crt.03699f53 with IP's: [192.168.61.73 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 15:41:45.315698  862734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.crt.03699f53 ...
	I1114 15:41:45.315730  862734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.crt.03699f53: {Name:mkcd91b3bc37da06ffd5c424e410511ed116ce59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:45.315892  862734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.key.03699f53 ...
	I1114 15:41:45.315909  862734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.key.03699f53: {Name:mk876a5bdb6a309924f59a09f7696f53d32d7437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:45.315980  862734 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.crt.03699f53 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.crt
	I1114 15:41:45.316040  862734 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.key.03699f53 -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.key
	I1114 15:41:45.316091  862734 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.key
	I1114 15:41:45.316104  862734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.crt with IP's: []
	I1114 15:41:45.459713  862734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.crt ...
	I1114 15:41:45.459746  862734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.crt: {Name:mk6208fb5166ecef736e708b574f257d9d2cc7e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:45.464077  862734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.key ...
	I1114 15:41:45.464115  862734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.key: {Name:mk4471e72e1bba8e3934be7329ea12879970017e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:41:45.464368  862734 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:41:45.464417  862734 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:41:45.464434  862734 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:41:45.464467  862734 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:41:45.464504  862734 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:41:45.464534  862734 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:41:45.464594  862734 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:41:45.465414  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:41:45.491867  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:41:45.516208  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:41:45.540864  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:41:45.565061  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:41:45.588561  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:41:45.612076  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:41:45.635411  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:41:45.661481  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:41:45.685333  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:41:45.708366  862734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:41:45.732533  862734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:41:45.752032  862734 ssh_runner.go:195] Run: openssl version
	I1114 15:41:45.759374  862734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:41:45.770506  862734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:41:45.775227  862734 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:41:45.775296  862734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:41:45.781255  862734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:41:45.791335  862734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:41:45.801004  862734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:41:45.806606  862734 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:41:45.806667  862734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:41:45.813921  862734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:41:45.827649  862734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:41:45.837277  862734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:41:45.842318  862734 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:41:45.842390  862734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:41:45.848023  862734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:41:45.857449  862734 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:41:45.861668  862734 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 15:41:45.861725  862734 kubeadm.go:404] StartCluster: {Name:enable-default-cni-492851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.3 ClusterName:enable-default-cni-492851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.73 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:41:45.861798  862734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:41:45.861856  862734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:41:45.901588  862734 cri.go:89] found id: ""
	I1114 15:41:45.901678  862734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:41:45.910598  862734 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:41:45.918923  862734 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:41:45.927313  862734 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:41:45.927363  862734 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:41:45.977974  862734 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:41:45.978146  862734 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:41:46.156070  862734 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:41:46.156196  862734 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:41:46.156298  862734 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1114 15:41:46.486314  862734 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:41:46.488560  862734 out.go:204]   - Generating certificates and keys ...
	I1114 15:41:46.488756  862734 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:41:46.488879  862734 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:41:46.873785  862734 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 15:41:47.054876  862734 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 15:41:47.169782  862734 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 15:41:47.367340  862734 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 15:41:47.631421  862734 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 15:41:47.631801  862734 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-492851 localhost] and IPs [192.168.61.73 127.0.0.1 ::1]
	I1114 15:41:47.704478  862734 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 15:41:47.704889  862734 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-492851 localhost] and IPs [192.168.61.73 127.0.0.1 ::1]
	I1114 15:41:47.787019  862734 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 15:41:47.855777  862734 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 15:41:48.000657  862734 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 15:41:48.000974  862734 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:41:48.209057  862734 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:41:48.689771  862734 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:41:48.834739  862734 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:41:49.200183  862734 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:41:49.201079  862734 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:41:49.206266  862734 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:41:45.150276  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:45.150968  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:45.649490  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:45.650174  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:46.150374  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:46.151212  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:46.649752  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:46.650587  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:47.149961  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:47.150685  862303 api_server.go:269] stopped: https://192.168.39.22:8443/healthz: Get "https://192.168.39.22:8443/healthz": dial tcp 192.168.39.22:8443: connect: connection refused
	I1114 15:41:47.649332  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:45.617048  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:45.714031  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:45.714076  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:45.617644  864240 retry.go:31] will retry after 1.608605727s: waiting for machine to come up
	I1114 15:41:47.227739  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:47.228294  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:47.228319  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:47.228203  864240 retry.go:31] will retry after 1.550308091s: waiting for machine to come up
	I1114 15:41:48.779992  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:48.780431  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:48.780460  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:48.780358  864240 retry.go:31] will retry after 2.451753793s: waiting for machine to come up
	I1114 15:41:49.207984  862734 out.go:204]   - Booting up control plane ...
	I1114 15:41:49.208188  862734 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:41:49.208284  862734 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:41:49.208711  862734 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:41:49.225969  862734 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:41:49.226591  862734 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:41:49.226700  862734 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:41:49.368555  862734 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:41:50.935262  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:41:50.935301  862303 api_server.go:103] status: https://192.168.39.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:41:50.935318  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:51.026335  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:41:51.026383  862303 api_server.go:103] status: https://192.168.39.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:41:51.149624  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:51.158326  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:41:51.158358  862303 api_server.go:103] status: https://192.168.39.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:41:51.649964  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:51.656165  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:41:51.656200  862303 api_server.go:103] status: https://192.168.39.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:41:52.149477  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:52.177202  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:41:52.177242  862303 api_server.go:103] status: https://192.168.39.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:41:52.649571  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:41:52.655958  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I1114 15:41:52.666500  862303 api_server.go:141] control plane version: v1.28.3
	I1114 15:41:52.666542  862303 api_server.go:131] duration metric: took 26.01901436s to wait for apiserver health ...
	I1114 15:41:52.666555  862303 cni.go:84] Creating CNI manager for ""
	I1114 15:41:52.666567  862303 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:41:52.668456  862303 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:41:52.669955  862303 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:41:52.683447  862303 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:41:52.707434  862303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:41:52.724416  862303 system_pods.go:59] 6 kube-system pods found
	I1114 15:41:52.724461  862303 system_pods.go:61] "coredns-5dd5756b68-jdh5n" [d4909d89-2ca2-450b-8247-3c02fdf3a3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:41:52.724470  862303 system_pods.go:61] "etcd-pause-584924" [3dedb784-3bab-4fee-80ec-47246d77571f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:41:52.724486  862303 system_pods.go:61] "kube-apiserver-pause-584924" [8100c266-77ba-4ab4-89ca-1a367d24facb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:41:52.724495  862303 system_pods.go:61] "kube-controller-manager-pause-584924" [bfb4ace6-ecaf-4dc6-af4c-96e70d7d4c7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:41:52.724507  862303 system_pods.go:61] "kube-proxy-n97hp" [1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:41:52.724523  862303 system_pods.go:61] "kube-scheduler-pause-584924" [ac04c7da-bfa6-4ae6-97af-ea28af39867c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:41:52.724539  862303 system_pods.go:74] duration metric: took 17.066602ms to wait for pod list to return data ...
	I1114 15:41:52.724554  862303 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:41:52.732557  862303 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:41:52.732592  862303 node_conditions.go:123] node cpu capacity is 2
	I1114 15:41:52.732607  862303 node_conditions.go:105] duration metric: took 8.045873ms to run NodePressure ...
	I1114 15:41:52.732629  862303 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:41:53.063865  862303 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:41:53.069844  862303 kubeadm.go:787] kubelet initialised
	I1114 15:41:53.069873  862303 kubeadm.go:788] duration metric: took 5.977405ms waiting for restarted kubelet to initialise ...
	I1114 15:41:53.069884  862303 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:41:53.077523  862303 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jdh5n" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:53.085939  862303 pod_ready.go:92] pod "coredns-5dd5756b68-jdh5n" in "kube-system" namespace has status "Ready":"True"
	I1114 15:41:53.085969  862303 pod_ready.go:81] duration metric: took 8.400166ms waiting for pod "coredns-5dd5756b68-jdh5n" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:53.085982  862303 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:41:51.234447  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:51.234995  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:51.235042  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:51.234952  864240 retry.go:31] will retry after 2.752565477s: waiting for machine to come up
	I1114 15:41:53.988875  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:53.989275  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:53.989301  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:53.989223  864240 retry.go:31] will retry after 3.500121045s: waiting for machine to come up
	I1114 15:41:58.371196  862734 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004553 seconds
	I1114 15:41:58.371322  862734 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:41:58.386010  862734 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:41:58.919667  862734 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:41:58.919940  862734 kubeadm.go:322] [mark-control-plane] Marking the node enable-default-cni-492851 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:41:59.433928  862734 kubeadm.go:322] [bootstrap-token] Using token: 2qj4h9.ryfgabk4foo500qz
	I1114 15:41:59.435378  862734 out.go:204]   - Configuring RBAC rules ...
	I1114 15:41:59.435534  862734 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:41:59.440823  862734 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:41:59.451038  862734 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:41:59.466573  862734 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:41:59.473402  862734 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:41:59.478760  862734 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:41:59.493945  862734 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:41:59.735729  862734 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:41:59.867459  862734 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:41:59.867519  862734 kubeadm.go:322] 
	I1114 15:41:59.867596  862734 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:41:59.867609  862734 kubeadm.go:322] 
	I1114 15:41:59.867718  862734 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:41:59.867740  862734 kubeadm.go:322] 
	I1114 15:41:59.867775  862734 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:41:59.867845  862734 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:41:59.867918  862734 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:41:59.867927  862734 kubeadm.go:322] 
	I1114 15:41:59.868051  862734 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:41:59.868107  862734 kubeadm.go:322] 
	I1114 15:41:59.868191  862734 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:41:59.868201  862734 kubeadm.go:322] 
	I1114 15:41:59.868271  862734 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:41:59.868374  862734 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:41:59.868491  862734 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:41:59.868508  862734 kubeadm.go:322] 
	I1114 15:41:59.868604  862734 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:41:59.868710  862734 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:41:59.868734  862734 kubeadm.go:322] 
	I1114 15:41:59.868865  862734 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2qj4h9.ryfgabk4foo500qz \
	I1114 15:41:59.869037  862734 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:41:59.869077  862734 kubeadm.go:322] 	--control-plane 
	I1114 15:41:59.869087  862734 kubeadm.go:322] 
	I1114 15:41:59.869205  862734 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:41:59.869219  862734 kubeadm.go:322] 
	I1114 15:41:59.869333  862734 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2qj4h9.ryfgabk4foo500qz \
	I1114 15:41:59.869518  862734 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:41:59.869677  862734 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:41:59.869702  862734 cni.go:84] Creating CNI manager for "bridge"
	I1114 15:41:59.871531  862734 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:41:55.113759  862303 pod_ready.go:102] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"False"
	I1114 15:41:57.610598  862303 pod_ready.go:102] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"False"
	I1114 15:41:57.493862  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:41:57.494298  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find current IP address of domain bridge-492851 in network mk-bridge-492851
	I1114 15:41:57.494331  864128 main.go:141] libmachine: (bridge-492851) DBG | I1114 15:41:57.494237  864240 retry.go:31] will retry after 5.536278352s: waiting for machine to come up
	I1114 15:41:59.872914  862734 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:41:59.894698  862734 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:41:59.916839  862734 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:41:59.916935  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:41:59.916944  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=enable-default-cni-492851 minikube.k8s.io/updated_at=2023_11_14T15_41_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:00.200401  862734 ops.go:34] apiserver oom_adj: -16
	I1114 15:42:00.200690  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:00.314467  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:00.932924  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:01.433157  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:01.933268  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:00.109537  862303 pod_ready.go:102] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"False"
	I1114 15:42:02.109581  862303 pod_ready.go:102] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"False"
	I1114 15:42:04.115540  862303 pod_ready.go:102] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"False"
	I1114 15:42:03.033909  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.035058  864128 main.go:141] libmachine: (bridge-492851) Found IP for machine: 192.168.72.206
	I1114 15:42:03.035093  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has current primary IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.035102  864128 main.go:141] libmachine: (bridge-492851) Reserving static IP address...
	I1114 15:42:03.035564  864128 main.go:141] libmachine: (bridge-492851) DBG | unable to find host DHCP lease matching {name: "bridge-492851", mac: "52:54:00:8a:93:6b", ip: "192.168.72.206"} in network mk-bridge-492851
	I1114 15:42:03.124558  864128 main.go:141] libmachine: (bridge-492851) DBG | Getting to WaitForSSH function...
	I1114 15:42:03.124592  864128 main.go:141] libmachine: (bridge-492851) Reserved static IP address: 192.168.72.206
	I1114 15:42:03.124600  864128 main.go:141] libmachine: (bridge-492851) Waiting for SSH to be available...
	I1114 15:42:03.127351  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.127763  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.127793  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.128106  864128 main.go:141] libmachine: (bridge-492851) DBG | Using SSH client type: external
	I1114 15:42:03.128136  864128 main.go:141] libmachine: (bridge-492851) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa (-rw-------)
	I1114 15:42:03.128172  864128 main.go:141] libmachine: (bridge-492851) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:42:03.128190  864128 main.go:141] libmachine: (bridge-492851) DBG | About to run SSH command:
	I1114 15:42:03.128207  864128 main.go:141] libmachine: (bridge-492851) DBG | exit 0
	I1114 15:42:03.237113  864128 main.go:141] libmachine: (bridge-492851) DBG | SSH cmd err, output: <nil>: 
	I1114 15:42:03.237376  864128 main.go:141] libmachine: (bridge-492851) KVM machine creation complete!
	I1114 15:42:03.237783  864128 main.go:141] libmachine: (bridge-492851) Calling .GetConfigRaw
	I1114 15:42:03.238435  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:03.238673  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:03.238883  864128 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 15:42:03.238901  864128 main.go:141] libmachine: (bridge-492851) Calling .GetState
	I1114 15:42:03.240479  864128 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 15:42:03.240501  864128 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 15:42:03.240511  864128 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 15:42:03.240521  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:03.243751  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.244611  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.244635  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.245078  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:03.245275  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.245407  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.245620  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:03.245802  864128 main.go:141] libmachine: Using SSH client type: native
	I1114 15:42:03.246195  864128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I1114 15:42:03.246209  864128 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 15:42:03.368325  864128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:42:03.368367  864128 main.go:141] libmachine: Detecting the provisioner...
	I1114 15:42:03.368381  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:03.371616  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.372043  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.372076  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.372200  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:03.372450  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.372673  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.372861  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:03.373098  864128 main.go:141] libmachine: Using SSH client type: native
	I1114 15:42:03.373517  864128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I1114 15:42:03.373535  864128 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 15:42:03.498441  864128 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 15:42:03.498536  864128 main.go:141] libmachine: found compatible host: buildroot
	I1114 15:42:03.498548  864128 main.go:141] libmachine: Provisioning with buildroot...
	I1114 15:42:03.498560  864128 main.go:141] libmachine: (bridge-492851) Calling .GetMachineName
	I1114 15:42:03.498887  864128 buildroot.go:166] provisioning hostname "bridge-492851"
	I1114 15:42:03.498913  864128 main.go:141] libmachine: (bridge-492851) Calling .GetMachineName
	I1114 15:42:03.499100  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:03.502372  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.502817  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.502847  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.502986  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:03.503235  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.503442  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.503614  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:03.503812  864128 main.go:141] libmachine: Using SSH client type: native
	I1114 15:42:03.504369  864128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I1114 15:42:03.504406  864128 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-492851 && echo "bridge-492851" | sudo tee /etc/hostname
	I1114 15:42:03.656646  864128 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-492851
	
	I1114 15:42:03.656677  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:03.660043  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.660440  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.660474  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.660678  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:03.660939  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.661172  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.661359  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:03.661526  864128 main.go:141] libmachine: Using SSH client type: native
	I1114 15:42:03.661934  864128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I1114 15:42:03.661962  864128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-492851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-492851/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-492851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:42:03.794209  864128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:42:03.794243  864128 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:42:03.794290  864128 buildroot.go:174] setting up certificates
	I1114 15:42:03.794306  864128 provision.go:83] configureAuth start
	I1114 15:42:03.794325  864128 main.go:141] libmachine: (bridge-492851) Calling .GetMachineName
	I1114 15:42:03.794631  864128 main.go:141] libmachine: (bridge-492851) Calling .GetIP
	I1114 15:42:03.798193  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.798659  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.798692  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.798857  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:03.801521  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.801891  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.801921  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.802089  864128 provision.go:138] copyHostCerts
	I1114 15:42:03.802151  864128 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:42:03.802174  864128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:42:03.802251  864128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:42:03.802397  864128 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:42:03.802412  864128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:42:03.802449  864128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:42:03.802545  864128 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:42:03.802557  864128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:42:03.802591  864128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:42:03.802676  864128 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.bridge-492851 san=[192.168.72.206 192.168.72.206 localhost 127.0.0.1 minikube bridge-492851]
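
The server certificate generated here carries the SAN list shown in the log (the VM IP twice, localhost, 127.0.0.1, minikube, bridge-492851). If needed, that can be confirmed against the generated file itself -- the path is taken from the log line above:

# inspect the SANs baked into the generated server cert
openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem \
  | grep -A1 'Subject Alternative Name'
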
	I1114 15:42:03.891957  864128 provision.go:172] copyRemoteCerts
	I1114 15:42:03.892006  864128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:42:03.892026  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:03.894685  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.895104  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:03.895141  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:03.895300  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:03.895481  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:03.895631  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:03.895761  864128 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa Username:docker}
	I1114 15:42:03.990348  864128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1114 15:42:04.026482  864128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:42:04.058176  864128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:42:04.087904  864128 provision.go:86] duration metric: configureAuth took 293.581991ms
	I1114 15:42:04.087935  864128 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:42:04.088120  864128 config.go:182] Loaded profile config "bridge-492851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:42:04.088188  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:04.091158  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.091710  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.091741  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.091922  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:04.092120  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.092345  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.092604  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:04.092840  864128 main.go:141] libmachine: Using SSH client type: native
	I1114 15:42:04.093269  864128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I1114 15:42:04.093289  864128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:42:04.440840  864128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:42:04.440869  864128 main.go:141] libmachine: Checking connection to Docker...
	I1114 15:42:04.440882  864128 main.go:141] libmachine: (bridge-492851) Calling .GetURL
	I1114 15:42:04.442244  864128 main.go:141] libmachine: (bridge-492851) DBG | Using libvirt version 6000000
	I1114 15:42:04.445254  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.445670  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.445725  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.445900  864128 main.go:141] libmachine: Docker is up and running!
	I1114 15:42:04.445921  864128 main.go:141] libmachine: Reticulating splines...
	I1114 15:42:04.445931  864128 client.go:171] LocalClient.Create took 26.215594675s
	I1114 15:42:04.445967  864128 start.go:167] duration metric: libmachine.API.Create for "bridge-492851" took 26.215672381s
	I1114 15:42:04.445980  864128 start.go:300] post-start starting for "bridge-492851" (driver="kvm2")
	I1114 15:42:04.445993  864128 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:42:04.446016  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:04.446294  864128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:42:04.446333  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:04.449025  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.449411  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.449435  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.449624  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:04.449825  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.450086  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:04.450281  864128 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa Username:docker}
	I1114 15:42:04.548913  864128 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:42:04.553569  864128 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:42:04.553593  864128 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:42:04.553643  864128 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:42:04.553715  864128 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:42:04.553820  864128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:42:04.563658  864128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:42:04.590271  864128 start.go:303] post-start completed in 144.274044ms
	I1114 15:42:04.590316  864128 main.go:141] libmachine: (bridge-492851) Calling .GetConfigRaw
	I1114 15:42:04.590918  864128 main.go:141] libmachine: (bridge-492851) Calling .GetIP
	I1114 15:42:04.593610  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.594019  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.594081  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.594319  864128 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/config.json ...
	I1114 15:42:04.594558  864128 start.go:128] duration metric: createHost completed in 26.388381142s
	I1114 15:42:04.594584  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:04.597148  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.597531  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.597568  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.597716  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:04.597875  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.598058  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.598219  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:04.598381  864128 main.go:141] libmachine: Using SSH client type: native
	I1114 15:42:04.598850  864128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I1114 15:42:04.598867  864128 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:42:04.725326  864128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699976524.704684952
	
	I1114 15:42:04.725351  864128 fix.go:206] guest clock: 1699976524.704684952
	I1114 15:42:04.725361  864128 fix.go:219] Guest: 2023-11-14 15:42:04.704684952 +0000 UTC Remote: 2023-11-14 15:42:04.594572775 +0000 UTC m=+39.089362519 (delta=110.112177ms)
	I1114 15:42:04.725407  864128 fix.go:190] guest clock delta is within tolerance: 110.112177ms
	I1114 15:42:04.725417  864128 start.go:83] releasing machines lock for "bridge-492851", held for 26.519747753s
	I1114 15:42:04.725457  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:04.725734  864128 main.go:141] libmachine: (bridge-492851) Calling .GetIP
	I1114 15:42:04.729299  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.729741  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.729774  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.729976  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:04.730504  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:04.730694  864128 main.go:141] libmachine: (bridge-492851) Calling .DriverName
	I1114 15:42:04.730795  864128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:42:04.730840  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:04.730901  864128 ssh_runner.go:195] Run: cat /version.json
	I1114 15:42:04.730928  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHHostname
	I1114 15:42:04.733742  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.734025  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.734188  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.734218  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.734404  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:04.734531  864128 main.go:141] libmachine: (bridge-492851) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:93:6b", ip: ""} in network mk-bridge-492851: {Iface:virbr1 ExpiryTime:2023-11-14 16:41:56 +0000 UTC Type:0 Mac:52:54:00:8a:93:6b Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-492851 Clientid:01:52:54:00:8a:93:6b}
	I1114 15:42:04.734564  864128 main.go:141] libmachine: (bridge-492851) DBG | domain bridge-492851 has defined IP address 192.168.72.206 and MAC address 52:54:00:8a:93:6b in network mk-bridge-492851
	I1114 15:42:04.734605  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.734711  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHPort
	I1114 15:42:04.734816  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:04.734908  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHKeyPath
	I1114 15:42:04.734951  864128 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa Username:docker}
	I1114 15:42:04.735056  864128 main.go:141] libmachine: (bridge-492851) Calling .GetSSHUsername
	I1114 15:42:04.735181  864128 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/bridge-492851/id_rsa Username:docker}
	I1114 15:42:04.848647  864128 ssh_runner.go:195] Run: systemctl --version
	I1114 15:42:04.855788  864128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:42:05.020511  864128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:42:05.029648  864128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:42:05.029732  864128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:42:05.048486  864128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:42:05.048515  864128 start.go:472] detecting cgroup driver to use...
	I1114 15:42:05.048595  864128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:42:05.065594  864128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:42:05.077997  864128 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:42:05.078057  864128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:42:05.091240  864128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:42:05.105470  864128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:42:05.234245  864128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:42:05.376231  864128 docker.go:219] disabling docker service ...
	I1114 15:42:05.376315  864128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:42:05.391490  864128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:42:05.404772  864128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:42:05.540363  864128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:42:05.611999  862303 pod_ready.go:92] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:05.612032  862303 pod_ready.go:81] duration metric: took 12.526040425s waiting for pod "etcd-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.612045  862303 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.624444  862303 pod_ready.go:92] pod "kube-apiserver-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:05.624471  862303 pod_ready.go:81] duration metric: took 12.417443ms waiting for pod "kube-apiserver-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.624483  862303 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.641415  862303 pod_ready.go:92] pod "kube-controller-manager-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:05.641443  862303 pod_ready.go:81] duration metric: took 16.951128ms waiting for pod "kube-controller-manager-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.641456  862303 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n97hp" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.653412  862303 pod_ready.go:92] pod "kube-proxy-n97hp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:05.653432  862303 pod_ready.go:81] duration metric: took 11.968279ms waiting for pod "kube-proxy-n97hp" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.653449  862303 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.663731  862303 pod_ready.go:92] pod "kube-scheduler-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:05.663751  862303 pod_ready.go:81] duration metric: took 10.292998ms waiting for pod "kube-scheduler-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:05.663760  862303 pod_ready.go:38] duration metric: took 12.593864463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:42:05.663787  862303 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:42:05.691398  862303 ops.go:34] apiserver oom_adj: -16
	I1114 15:42:05.691422  862303 kubeadm.go:640] restartCluster took 56.595274449s
	I1114 15:42:05.691433  862303 kubeadm.go:406] StartCluster complete in 56.81586964s
	I1114 15:42:05.691463  862303 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:42:05.691554  862303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:42:05.692652  862303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:42:05.693154  862303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:42:05.693398  862303 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:42:05.695478  862303 out.go:177] * Enabled addons: 
	I1114 15:42:05.693541  862303 config.go:182] Loaded profile config "pause-584924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:42:05.693965  862303 kapi.go:59] client config for pause-584924: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/client.crt", KeyFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/profiles/pause-584924/client.key", CAFile:"/home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c24880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 15:42:05.697079  862303 addons.go:502] enable addons completed in 3.679827ms: enabled=[]
	I1114 15:42:05.706728  862303 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-584924" context rescaled to 1 replicas
	I1114 15:42:05.706772  862303 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:42:05.708637  862303 out.go:177] * Verifying Kubernetes components...
	I1114 15:42:05.686278  864128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:42:05.706277  864128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:42:05.728989  864128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:42:05.729064  864128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:42:05.742966  864128 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:42:05.743041  864128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:42:05.756434  864128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:42:05.766480  864128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:42:05.776237  864128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
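
After the sed edits above, the CRI-O drop-in config should carry the pause image, cgroup manager, and conmon cgroup named in the log. A quick check on the node -- expected values are reconstructed from the commands above, not captured as output in the log:

sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
# pause_image = "registry.k8s.io/pause:3.9"     (set by the sed edit above)
# cgroup_manager = "cgroupfs"                   (set by the sed edit above)
# conmon_cgroup = "pod"                         (inserted after cgroup_manager)
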
	I1114 15:42:05.787938  864128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:42:05.797011  864128 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:42:05.797077  864128 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:42:05.819302  864128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:42:05.832032  864128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:42:05.966698  864128 ssh_runner.go:195] Run: sudo systemctl restart crio
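
The sequence above loads br_netfilter (after the initial sysctl probe failed), enables IPv4 forwarding, and restarts CRI-O. A sketch of how those prerequisites can be confirmed on the node afterwards:

lsmod | grep br_netfilter                     # module loaded by the modprobe above
cat /proc/sys/net/ipv4/ip_forward             # expected: 1
sysctl net.bridge.bridge-nf-call-iptables     # resolvable once br_netfilter is loaded
systemctl is-active crio                      # expected: active after the restart
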
	I1114 15:42:06.157960  864128 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:42:06.158046  864128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:42:06.163624  864128 start.go:540] Will wait 60s for crictl version
	I1114 15:42:06.163686  864128 ssh_runner.go:195] Run: which crictl
	I1114 15:42:06.168346  864128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:42:06.218927  864128 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:42:06.219016  864128 ssh_runner.go:195] Run: crio --version
	I1114 15:42:06.274870  864128 ssh_runner.go:195] Run: crio --version
	I1114 15:42:06.339649  864128 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:42:02.432372  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:02.932320  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:03.432954  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:03.933213  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:04.432955  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:04.933251  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:05.432261  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:05.932341  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:06.432606  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:06.932718  862734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:42:05.711019  862303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:42:05.859358  862303 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 15:42:05.859405  862303 node_ready.go:35] waiting up to 6m0s for node "pause-584924" to be "Ready" ...
	I1114 15:42:05.864729  862303 node_ready.go:49] node "pause-584924" has status "Ready":"True"
	I1114 15:42:05.864782  862303 node_ready.go:38] duration metric: took 5.359417ms waiting for node "pause-584924" to be "Ready" ...
	I1114 15:42:05.864797  862303 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:42:06.013945  862303 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jdh5n" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:06.408671  862303 pod_ready.go:92] pod "coredns-5dd5756b68-jdh5n" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:06.408697  862303 pod_ready.go:81] duration metric: took 394.728091ms waiting for pod "coredns-5dd5756b68-jdh5n" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:06.408707  862303 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:06.808702  862303 pod_ready.go:92] pod "etcd-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:06.808730  862303 pod_ready.go:81] duration metric: took 400.015231ms waiting for pod "etcd-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:06.808762  862303 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:07.208334  862303 pod_ready.go:92] pod "kube-apiserver-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:07.208358  862303 pod_ready.go:81] duration metric: took 399.586691ms waiting for pod "kube-apiserver-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:07.208368  862303 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:07.617338  862303 pod_ready.go:92] pod "kube-controller-manager-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:07.617367  862303 pod_ready.go:81] duration metric: took 408.991488ms waiting for pod "kube-controller-manager-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:07.617382  862303 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n97hp" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:08.009383  862303 pod_ready.go:92] pod "kube-proxy-n97hp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:08.009414  862303 pod_ready.go:81] duration metric: took 392.022178ms waiting for pod "kube-proxy-n97hp" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:08.009428  862303 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:08.408874  862303 pod_ready.go:92] pod "kube-scheduler-pause-584924" in "kube-system" namespace has status "Ready":"True"
	I1114 15:42:08.408905  862303 pod_ready.go:81] duration metric: took 399.468489ms waiting for pod "kube-scheduler-pause-584924" in "kube-system" namespace to be "Ready" ...
	I1114 15:42:08.408924  862303 pod_ready.go:38] duration metric: took 2.544107476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:42:08.408944  862303 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:42:08.408997  862303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:42:08.435501  862303 api_server.go:72] duration metric: took 2.728685111s to wait for apiserver process to appear ...
	I1114 15:42:08.435528  862303 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:42:08.435547  862303 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1114 15:42:08.442671  862303 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I1114 15:42:08.444407  862303 api_server.go:141] control plane version: v1.28.3
	I1114 15:42:08.444429  862303 api_server.go:131] duration metric: took 8.895518ms to wait for apiserver health ...
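
The readiness gate here is a plain HTTPS probe of the API server followed by a version read; both endpoints are reachable without credentials on a default kubeadm/minikube cluster, so the same check can be reproduced by hand:

# -k: the serving cert is signed by minikube's private CA;
# anonymous access to /healthz and /version is the Kubernetes default
curl -k https://192.168.39.22:8443/healthz    # expected: ok
curl -k https://192.168.39.22:8443/version    # reports v1.28.3, matching the log
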
	I1114 15:42:08.444438  862303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:42:08.611850  862303 system_pods.go:59] 6 kube-system pods found
	I1114 15:42:08.611877  862303 system_pods.go:61] "coredns-5dd5756b68-jdh5n" [d4909d89-2ca2-450b-8247-3c02fdf3a3b5] Running
	I1114 15:42:08.611882  862303 system_pods.go:61] "etcd-pause-584924" [3dedb784-3bab-4fee-80ec-47246d77571f] Running
	I1114 15:42:08.611886  862303 system_pods.go:61] "kube-apiserver-pause-584924" [8100c266-77ba-4ab4-89ca-1a367d24facb] Running
	I1114 15:42:08.611891  862303 system_pods.go:61] "kube-controller-manager-pause-584924" [bfb4ace6-ecaf-4dc6-af4c-96e70d7d4c7e] Running
	I1114 15:42:08.611895  862303 system_pods.go:61] "kube-proxy-n97hp" [1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d] Running
	I1114 15:42:08.611899  862303 system_pods.go:61] "kube-scheduler-pause-584924" [ac04c7da-bfa6-4ae6-97af-ea28af39867c] Running
	I1114 15:42:08.611905  862303 system_pods.go:74] duration metric: took 167.461194ms to wait for pod list to return data ...
	I1114 15:42:08.611914  862303 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:42:08.809996  862303 default_sa.go:45] found service account: "default"
	I1114 15:42:08.810028  862303 default_sa.go:55] duration metric: took 198.106972ms for default service account to be created ...
	I1114 15:42:08.810041  862303 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:42:09.012338  862303 system_pods.go:86] 6 kube-system pods found
	I1114 15:42:09.012369  862303 system_pods.go:89] "coredns-5dd5756b68-jdh5n" [d4909d89-2ca2-450b-8247-3c02fdf3a3b5] Running
	I1114 15:42:09.012374  862303 system_pods.go:89] "etcd-pause-584924" [3dedb784-3bab-4fee-80ec-47246d77571f] Running
	I1114 15:42:09.012379  862303 system_pods.go:89] "kube-apiserver-pause-584924" [8100c266-77ba-4ab4-89ca-1a367d24facb] Running
	I1114 15:42:09.012383  862303 system_pods.go:89] "kube-controller-manager-pause-584924" [bfb4ace6-ecaf-4dc6-af4c-96e70d7d4c7e] Running
	I1114 15:42:09.012387  862303 system_pods.go:89] "kube-proxy-n97hp" [1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d] Running
	I1114 15:42:09.012391  862303 system_pods.go:89] "kube-scheduler-pause-584924" [ac04c7da-bfa6-4ae6-97af-ea28af39867c] Running
	I1114 15:42:09.012398  862303 system_pods.go:126] duration metric: took 202.351329ms to wait for k8s-apps to be running ...
	I1114 15:42:09.012405  862303 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:42:09.012461  862303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:42:09.038393  862303 system_svc.go:56] duration metric: took 25.961689ms WaitForService to wait for kubelet.
	I1114 15:42:09.038430  862303 kubeadm.go:581] duration metric: took 3.331619958s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:42:09.038455  862303 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:42:09.210013  862303 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:42:09.210045  862303 node_conditions.go:123] node cpu capacity is 2
	I1114 15:42:09.210078  862303 node_conditions.go:105] duration metric: took 171.605221ms to run NodePressure ...
	I1114 15:42:09.210093  862303 start.go:228] waiting for startup goroutines ...
	I1114 15:42:09.210108  862303 start.go:233] waiting for cluster config update ...
	I1114 15:42:09.210119  862303 start.go:242] writing updated cluster config ...
	I1114 15:42:09.210466  862303 ssh_runner.go:195] Run: rm -f paused
	I1114 15:42:09.275488  862303 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:42:09.280881  862303 out.go:177] * Done! kubectl is now configured to use "pause-584924" cluster and "default" namespace by default
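
With the profile reporting Done, the kubeconfig context written above can be exercised directly; a quick way to see the same state the wait loops verified (node Ready, the six kube-system pods Running):

kubectl --context pause-584924 get nodes
kubectl --context pause-584924 -n kube-system get pods
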
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:39:00 UTC, ends at Tue 2023-11-14 15:42:10 UTC. --
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.259277438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976530259250522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=64766db6-ce5b-46cd-941c-b9789b72012e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.260171365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28092000-b037-4b21-b228-cf908e5f994a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.260267442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28092000-b037-4b21-b228-cf908e5f994a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.260852804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28092000-b037-4b21-b228-cf908e5f994a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.322182157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=52b3236b-c74e-42e4-98ef-63cb80e89c6c name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.322267264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=52b3236b-c74e-42e4-98ef-63cb80e89c6c name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.324122465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9bf9c475-0dd2-419b-8336-9a3b6c90957e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.324797392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976530324777677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=9bf9c475-0dd2-419b-8336-9a3b6c90957e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.328585264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ebb6adeb-9ca6-40f7-8712-490ec563d7af name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.328675071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ebb6adeb-9ca6-40f7-8712-490ec563d7af name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.329085689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ebb6adeb-9ca6-40f7-8712-490ec563d7af name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.390762328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2d5db3a8-de34-4f63-94b1-5e23a784fe7d name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.390859244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2d5db3a8-de34-4f63-94b1-5e23a784fe7d name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.393329478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ac508ad1-eabe-41c5-8dd4-c5387969a13c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.394194704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976530394166799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ac508ad1-eabe-41c5-8dd4-c5387969a13c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.395469280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b131388e-fbd3-4691-bfbe-633ed1f583ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.395611938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b131388e-fbd3-4691-bfbe-633ed1f583ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.396310468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b131388e-fbd3-4691-bfbe-633ed1f583ca name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.454959312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=651de8cf-d79b-419e-9892-d8fc8ed8750e name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.455219500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=651de8cf-d79b-419e-9892-d8fc8ed8750e name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.457063177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e1374e03-31d4-459e-855e-5187ee591651 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.457740354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976530457719582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=e1374e03-31d4-459e-855e-5187ee591651 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.458594877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b114ac2-1960-44f5-9ff5-1cf0b7817ff1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.458703244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b114ac2-1960-44f5-9ff5-1cf0b7817ff1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:10 pause-584924 crio[2512]: time="2023-11-14 15:42:10.459202618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b114ac2-1960-44f5-9ff5-1cf0b7817ff1 name=/runtime.v1.RuntimeService/ListContainers
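The Request/Response pairs above are a CRI client (the kubelet, or a crictl invocation made while gathering these logs) polling CRI-O over gRPC in quick succession: a Version call, an ImageFsInfo call, and a ListContainers call with an empty filter, which is why crio logs "No filters were applied". As an illustration only, not part of the test run, the minimal Go sketch below issues the same Version and ListContainers calls against the crio socket; the k8s.io/cri-api/pkg/apis/runtime/v1 package path and the socket address are assumptions drawn from this log and the node annotations further down, not from the minikube test code.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Assumed socket path; it matches the kubeadm.alpha.kubernetes.io/cri-socket
        // annotation shown in the "describe nodes" section below.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimev1.NewRuntimeServiceClient(conn)

        // /runtime.v1.RuntimeService/Version
        ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

        // /runtime.v1.RuntimeService/ListContainers with an empty filter,
        // the same call whose response is logged in full above.
        list, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range list.Containers {
            fmt.Println(c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }

The container set such a client prints is the same running/exited mix summarized in the container status table that follows.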
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0758c80ed28c5       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   18 seconds ago       Running             kube-proxy                2                   24c6226ef323d       kube-proxy-n97hp
	6e0166e973de1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   2                   6ca9a82cf7ac8       coredns-5dd5756b68-jdh5n
	22c6a2f18998d       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   24 seconds ago       Running             kube-scheduler            2                   6ba15586e4744       kube-scheduler-pause-584924
	46b094bf992fe       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   24 seconds ago       Running             kube-apiserver            3                   72261464b4b76       kube-apiserver-pause-584924
	93b88ca280bdc       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   24 seconds ago       Running             kube-controller-manager   2                   922d7732d2ad1       kube-controller-manager-pause-584924
	c84581f31aa6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago       Running             etcd                      2                   e1ec52a3ddf67       etcd-pause-584924
	23f1a21829bfb       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   47 seconds ago       Exited              kube-apiserver            2                   72261464b4b76       kube-apiserver-pause-584924
	2aab38cefaa11       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                1                   24c6226ef323d       kube-proxy-n97hp
	33eecf61003ec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   6ca9a82cf7ac8       coredns-5dd5756b68-jdh5n
	5a07a128ad54a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   About a minute ago   Exited              kube-scheduler            1                   f750d24325b3d       kube-scheduler-pause-584924
	647554a693a7b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   About a minute ago   Exited              kube-controller-manager   1                   12b4ba83a457a       kube-controller-manager-pause-584924
	c55ada6f40723       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      1                   b97c84c69c2d9       etcd-pause-584924
	
	* 
	* ==> coredns [33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35565 - 947 "HINFO IN 3670590539627910351.7552014881936575017. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010108578s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58253 - 40892 "HINFO IN 8558496786850925552.4336328656922902964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009843603s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-584924
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-584924
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=pause-584924
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_39_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-584924
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 15:42:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    pause-584924
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 95e2f43f017a4f45a30e6960838bb782
	  System UUID:                95e2f43f-017a-4f45-a30e-6960838bb782
	  Boot ID:                    c5d0ebc1-df59-4686-af92-ea76b22027b3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jdh5n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m23s
	  kube-system                 etcd-pause-584924                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m37s
	  kube-system                 kube-apiserver-pause-584924             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-pause-584924    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-n97hp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-pause-584924             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node pause-584924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node pause-584924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m46s (x7 over 2m46s)  kubelet          Node pause-584924 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m37s                  kubelet          Node pause-584924 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m37s                  kubelet          Node pause-584924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s                  kubelet          Node pause-584924 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m36s                  kubelet          Node pause-584924 status is now: NodeReady
	  Normal  RegisteredNode           2m24s                  node-controller  Node pause-584924 event: Registered Node pause-584924 in Controller
	  Normal  Starting                 44s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)      kubelet          Node pause-584924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)      kubelet          Node pause-584924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x7 over 44s)      kubelet          Node pause-584924 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  44s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                     node-controller  Node pause-584924 event: Registered Node pause-584924 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.667629] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Nov14 15:39] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152631] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.106148] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.403756] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.108703] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.172869] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.106462] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.221153] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +10.273493] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +9.295905] systemd-fstab-generator[1253]: Ignoring "noauto" for root device
	[Nov14 15:40] kauditd_printk_skb: 19 callbacks suppressed
	[Nov14 15:41] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +0.258599] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.351040] systemd-fstab-generator[2280]: Ignoring "noauto" for root device
	[  +0.280694] systemd-fstab-generator[2295]: Ignoring "noauto" for root device
	[  +0.583986] systemd-fstab-generator[2399]: Ignoring "noauto" for root device
	[ +21.725415] systemd-fstab-generator[3241]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a] <==
	* 
	* 
	* ==> etcd [c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6] <==
	* {"level":"info","ts":"2023-11-14T15:41:48.102993Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:41:48.103037Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:41:48.10348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=(14835062946585175385)"}
	{"level":"info","ts":"2023-11-14T15:41:48.103636Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","added-peer-id":"cde0bb267fc4e559","added-peer-peer-urls":["https://192.168.39.22:2380"]}
	{"level":"info","ts":"2023-11-14T15:41:48.10383Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:41:48.103903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:41:48.115941Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-14T15:41:48.116184Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T15:41:48.116218Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T15:41:48.116276Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2023-11-14T15:41:48.116285Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2023-11-14T15:41:49.024521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-14T15:41:49.024659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:41:49.024724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 received MsgPreVoteResp from cde0bb267fc4e559 at term 2"}
	{"level":"info","ts":"2023-11-14T15:41:49.024773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became candidate at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.024804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 received MsgVoteResp from cde0bb267fc4e559 at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.024894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became leader at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.024931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cde0bb267fc4e559 elected leader cde0bb267fc4e559 at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.031691Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cde0bb267fc4e559","local-member-attributes":"{Name:pause-584924 ClientURLs:[https://192.168.39.22:2379]}","request-path":"/0/members/cde0bb267fc4e559/attributes","cluster-id":"eaed0234649c774e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:41:49.031946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:41:49.033425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:41:49.034857Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	{"level":"info","ts":"2023-11-14T15:41:49.035891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:41:49.037697Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:41:49.037754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  15:42:11 up 3 min,  0 users,  load average: 1.91, 0.86, 0.33
	Linux pause-584924 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336] <==
	* W1114 15:41:38.700187       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 15:41:41.267433       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 15:41:41.443309       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1114 15:41:43.868147       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	* 
	* ==> kube-apiserver [46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc] <==
	* I1114 15:41:50.855904       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1114 15:41:50.855960       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1114 15:41:50.856267       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1114 15:41:50.856431       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1114 15:41:51.024447       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 15:41:51.043938       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1114 15:41:51.044474       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1114 15:41:51.046851       1 shared_informer.go:318] Caches are synced for configmaps
	I1114 15:41:51.046924       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 15:41:51.054064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 15:41:51.057432       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1114 15:41:51.057500       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1114 15:41:51.056802       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1114 15:41:51.058139       1 aggregator.go:166] initial CRD sync complete...
	I1114 15:41:51.058168       1 autoregister_controller.go:141] Starting autoregister controller
	I1114 15:41:51.058189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 15:41:51.058211       1 cache.go:39] Caches are synced for autoregister controller
	I1114 15:41:51.870476       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 15:41:52.900558       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 15:41:52.933889       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 15:41:52.999785       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 15:41:53.039684       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 15:41:53.049682       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 15:42:04.159142       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 15:42:04.262763       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5] <==
	* 
	* 
	* ==> kube-controller-manager [93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c] <==
	* I1114 15:42:04.044890       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1114 15:42:04.044951       1 taint_manager.go:211] "Sending events to api server"
	I1114 15:42:04.045688       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-584924"
	I1114 15:42:04.045764       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1114 15:42:04.045904       1 event.go:307] "Event occurred" object="pause-584924" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-584924 event: Registered Node pause-584924 in Controller"
	I1114 15:42:04.048202       1 shared_informer.go:318] Caches are synced for crt configmap
	I1114 15:42:04.048535       1 shared_informer.go:318] Caches are synced for PVC protection
	I1114 15:42:04.049806       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1114 15:42:04.049886       1 shared_informer.go:318] Caches are synced for attach detach
	I1114 15:42:04.050196       1 shared_informer.go:318] Caches are synced for endpoint
	I1114 15:42:04.051984       1 shared_informer.go:318] Caches are synced for ephemeral
	I1114 15:42:04.057647       1 shared_informer.go:318] Caches are synced for TTL
	I1114 15:42:04.059311       1 shared_informer.go:318] Caches are synced for PV protection
	I1114 15:42:04.063812       1 shared_informer.go:318] Caches are synced for service account
	I1114 15:42:04.073295       1 shared_informer.go:318] Caches are synced for namespace
	I1114 15:42:04.076487       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1114 15:42:04.113462       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1114 15:42:04.163749       1 shared_informer.go:318] Caches are synced for resource quota
	I1114 15:42:04.200228       1 shared_informer.go:318] Caches are synced for resource quota
	I1114 15:42:04.234601       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1114 15:42:04.249531       1 shared_informer.go:318] Caches are synced for job
	I1114 15:42:04.253414       1 shared_informer.go:318] Caches are synced for cronjob
	I1114 15:42:04.600460       1 shared_informer.go:318] Caches are synced for garbage collector
	I1114 15:42:04.600550       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1114 15:42:04.610187       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130] <==
	* I1114 15:41:52.189055       1 server_others.go:69] "Using iptables proxy"
	I1114 15:41:52.208855       1 node.go:141] Successfully retrieved node IP: 192.168.39.22
	I1114 15:41:52.284562       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:41:52.284641       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:41:52.293121       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:41:52.293220       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:41:52.293495       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:41:52.293512       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:41:52.295497       1 config.go:188] "Starting service config controller"
	I1114 15:41:52.295542       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:41:52.295562       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:41:52.295566       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:41:52.296046       1 config.go:315] "Starting node config controller"
	I1114 15:41:52.296054       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:41:52.396462       1 shared_informer.go:318] Caches are synced for node config
	I1114 15:41:52.396573       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:41:52.396473       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87] <==
	* I1114 15:41:09.733074       1 server_others.go:69] "Using iptables proxy"
	E1114 15:41:09.736902       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	E1114 15:41:10.929844       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	E1114 15:41:13.265847       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	E1114 15:41:17.674908       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c] <==
	* I1114 15:41:48.992001       1 serving.go:348] Generated self-signed cert in-memory
	W1114 15:41:50.963985       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 15:41:50.964104       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 15:41:50.964147       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 15:41:50.964178       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 15:41:51.015113       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 15:41:51.015219       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:41:51.022807       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 15:41:51.023032       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 15:41:51.024605       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 15:41:51.024713       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 15:41:51.124279       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:39:00 UTC, ends at Tue 2023-11-14 15:42:14 UTC. --
	Nov 14 15:41:45 pause-584924 kubelet[3247]: I1114 15:41:45.108500    3247 scope.go:117] "RemoveContainer" containerID="5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c"
	Nov 14 15:41:45 pause-584924 kubelet[3247]: I1114 15:41:45.109559    3247 scope.go:117] "RemoveContainer" containerID="647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5"
	Nov 14 15:41:45 pause-584924 kubelet[3247]: E1114 15:41:45.278908    3247 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-584924?timeout=10s\": dial tcp 192.168.39.22:8443: connect: connection refused" interval="800ms"
	Nov 14 15:41:45 pause-584924 kubelet[3247]: I1114 15:41:45.813250    3247 scope.go:117] "RemoveContainer" containerID="23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.087178    3247 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-584924?timeout=10s\": dial tcp 192.168.39.22:8443: connect: connection refused" interval="1.6s"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: W1114 15:41:46.648745    3247 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.648822    3247 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:46 pause-584924 kubelet[3247]: I1114 15:41:46.684727    3247 kubelet_node_status.go:70] "Attempting to register node" node="pause-584924"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.685292    3247 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.22:8443: connect: connection refused" node="pause-584924"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.781495    3247 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-584924\" not found"
	Nov 14 15:41:47 pause-584924 kubelet[3247]: W1114 15:41:47.047666    3247 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:47 pause-584924 kubelet[3247]: E1114 15:41:47.047723    3247 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:49 pause-584924 kubelet[3247]: I1114 15:41:49.887184    3247 kubelet_node_status.go:70] "Attempting to register node" node="pause-584924"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.081914    3247 kubelet_node_status.go:108] "Node was previously registered" node="pause-584924"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.082112    3247 kubelet_node_status.go:73] "Successfully registered node" node="pause-584924"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.083942    3247 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.085022    3247 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.457268    3247 apiserver.go:52] "Watching apiserver"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.462919    3247 topology_manager.go:215] "Topology Admit Handler" podUID="d4909d89-2ca2-450b-8247-3c02fdf3a3b5" podNamespace="kube-system" podName="coredns-5dd5756b68-jdh5n"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.463106    3247 topology_manager.go:215] "Topology Admit Handler" podUID="1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d" podNamespace="kube-system" podName="kube-proxy-n97hp"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.479813    3247 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.505697    3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d-lib-modules\") pod \"kube-proxy-n97hp\" (UID: \"1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d\") " pod="kube-system/kube-proxy-n97hp"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.505779    3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d-xtables-lock\") pod \"kube-proxy-n97hp\" (UID: \"1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d\") " pod="kube-system/kube-proxy-n97hp"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.764090    3247 scope.go:117] "RemoveContainer" containerID="2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.766235    3247 scope.go:117] "RemoveContainer" containerID="33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-584924 -n pause-584924
helpers_test.go:261: (dbg) Run:  kubectl --context pause-584924 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-584924 -n pause-584924
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-584924 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-584924 logs -n 25: (1.910132235s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo docker                        | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo cat                           | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo                               | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo find                          | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-492851 sudo crio                          | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p flannel-492851                                    | flannel-492851 | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC | 14 Nov 23 15:42 UTC |
	| start   | -p calico-492851 --memory=3072                       | calico-492851  | jenkins | v1.32.0 | 14 Nov 23 15:42 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:42:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:42:15.100616  866067 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:42:15.101078  866067 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:42:15.101119  866067 out.go:309] Setting ErrFile to fd 2...
	I1114 15:42:15.101138  866067 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:42:15.101708  866067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:42:15.102612  866067 out.go:303] Setting JSON to false
	I1114 15:42:15.104395  866067 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":44687,"bootTime":1699931848,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:42:15.104482  866067 start.go:138] virtualization: kvm guest
	I1114 15:42:15.107020  866067 out.go:177] * [calico-492851] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:42:15.108688  866067 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:42:15.108640  866067 notify.go:220] Checking for updates...
	I1114 15:42:15.110794  866067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:42:15.112647  866067 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:42:15.114097  866067 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:42:15.115552  866067 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:42:15.124788  866067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:42:15.128610  866067 config.go:182] Loaded profile config "bridge-492851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:42:15.128862  866067 config.go:182] Loaded profile config "enable-default-cni-492851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:42:15.129118  866067 config.go:182] Loaded profile config "pause-584924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:42:15.129260  866067 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:42:15.185042  866067 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 15:42:15.186495  866067 start.go:298] selected driver: kvm2
	I1114 15:42:15.186515  866067 start.go:902] validating driver "kvm2" against <nil>
	I1114 15:42:15.186536  866067 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:42:15.187561  866067 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:42:15.187657  866067 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:42:15.204264  866067 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:42:15.204320  866067 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 15:42:15.204604  866067 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:42:15.204694  866067 cni.go:84] Creating CNI manager for "calico"
	I1114 15:42:15.204714  866067 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1114 15:42:15.204727  866067 start_flags.go:323] config:
	{Name:calico-492851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:calico-492851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:42:15.204961  866067 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:42:15.206968  866067 out.go:177] * Starting control plane node calico-492851 in cluster calico-492851
	I1114 15:42:15.208360  866067 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:42:15.208403  866067 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:42:15.208422  866067 cache.go:56] Caching tarball of preloaded images
	I1114 15:42:15.208512  866067 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:42:15.208523  866067 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:42:15.208643  866067 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/config.json ...
	I1114 15:42:15.208665  866067 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/config.json: {Name:mk45cbf794f0d4e97c72906de5b2c55b89497650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:42:15.208854  866067 start.go:365] acquiring machines lock for calico-492851: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:42:15.208908  866067 start.go:369] acquired machines lock for "calico-492851" in 30.63µs
	I1114 15:42:15.208934  866067 start.go:93] Provisioning new machine with config: &{Name:calico-492851 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:calico-492851 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:42:15.209076  866067 start.go:125] createHost starting for "" (driver="kvm2")
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:39:00 UTC, ends at Tue 2023-11-14 15:42:16 UTC. --
	Nov 14 15:42:15 pause-584924 crio[2512]: time="2023-11-14 15:42:15.974953441Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jdh5n,Uid:d4909d89-2ca2-450b-8247-3c02fdf3a3b5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1699976467002663725,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-14T15:39:47.970463724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-584924,Uid:f681713470efeb90d13bbb1400c11f63,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1699976466937798086,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f681713470efeb90d13bbb1400c11f63,kubernetes.io/config.seen: 2023-11-14T15:39:33.654438474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&PodSandboxMetadata{Name:etcd-pause-584924,Uid:0a4aa608ca82255f500ade68737f51f0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1699976466931942117,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,tier: control-plane,},Annotations:map
[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.22:2379,kubernetes.io/config.hash: 0a4aa608ca82255f500ade68737f51f0,kubernetes.io/config.seen: 2023-11-14T15:39:33.654429055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&PodSandboxMetadata{Name:kube-proxy-n97hp,Uid:1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1699976466922746696,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-14T15:39:47.847510539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b
4f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-584924,Uid:09064f3e3afb2d5af6ebc51e1d57b3c4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1699976466807748995,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.22:8443,kubernetes.io/config.hash: 09064f3e3afb2d5af6ebc51e1d57b3c4,kubernetes.io/config.seen: 2023-11-14T15:39:33.654436214Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-584924,Uid:83b60f15f6e163bbfb03259506a81e2f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1699976466690233136,Labels:map[string]string
{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 83b60f15f6e163bbfb03259506a81e2f,kubernetes.io/config.seen: 2023-11-14T15:39:33.654437517Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-584924,Uid:83b60f15f6e163bbfb03259506a81e2f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1699976462129703901,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,tier: control-plane,},Annotations:map[string]
string{kubernetes.io/config.hash: 83b60f15f6e163bbfb03259506a81e2f,kubernetes.io/config.seen: 2023-11-14T15:39:33.654437517Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-584924,Uid:f681713470efeb90d13bbb1400c11f63,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1699976461659031010,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f681713470efeb90d13bbb1400c11f63,kubernetes.io/config.seen: 2023-11-14T15:39:33.654438474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&PodSandboxMetadata{Name:etcd-pa
use-584924,Uid:0a4aa608ca82255f500ade68737f51f0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1699976461509943164,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.22:2379,kubernetes.io/config.hash: 0a4aa608ca82255f500ade68737f51f0,kubernetes.io/config.seen: 2023-11-14T15:39:33.654429055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:890040a18d6682ad0ffd8c87b980db4aac4d3604101fcb51e71ceb72a1730253,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-584924,Uid:09064f3e3afb2d5af6ebc51e1d57b3c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1699976461445676903,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.22:8443,kubernetes.io/config.hash: 09064f3e3afb2d5af6ebc51e1d57b3c4,kubernetes.io/config.seen: 2023-11-14T15:39:33.654436214Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=774c20e6-16f4-411a-a3bf-c185e6f411c4 name=/runtime.v1.RuntimeService/ListPodSandbox
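	The ListPodSandbox response above (and the ListContainers responses that follow) are the same CRI RPCs that crictl wraps, so the data can be read interactively on the node. A minimal sketch, assuming the pause-584924 guest is still running and crictl is available in the minikube ISO:

	    minikube ssh -p pause-584924
	    sudo crictl pods --namespace kube-system   # RuntimeService/ListPodSandbox
	    sudo crictl ps -a                          # RuntimeService/ListContainers, including exited attempts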
	Nov 14 15:42:15 pause-584924 crio[2512]: time="2023-11-14 15:42:15.975939806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4f38638-bc36-44cc-a09c-7485a966f5b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:15 pause-584924 crio[2512]: time="2023-11-14 15:42:15.976049678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e4f38638-bc36-44cc-a09c-7485a966f5b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:15 pause-584924 crio[2512]: time="2023-11-14 15:42:15.976832233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e4f38638-bc36-44cc-a09c-7485a966f5b8 name=/runtime.v1.RuntimeService/ListContainers
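	Note that this response lists two kube-apiserver containers in the same sandbox (72261464b4b7...): attempt 3 is CONTAINER_RUNNING while attempt 2 is CONTAINER_EXITED, i.e. the API server was restarted once inside that sandbox. crictl can filter on exactly these fields; a hedged sketch with illustrative flag values:

	    sudo crictl ps -a --name kube-apiserver   # shows both the running and the exited attempt
	    sudo crictl ps -a --state exited          # only containers that have exited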
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.014881149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4ebc1df3-0d7c-4245-9f71-5d75197cbc0e name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.015084260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4ebc1df3-0d7c-4245-9f71-5d75197cbc0e name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.019329716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2f61b7d6-2efa-432a-86a3-f54c28f9d61d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.020023647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976536019975874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=2f61b7d6-2efa-432a-86a3-f54c28f9d61d name=/runtime.v1.ImageService/ImageFsInfo
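	The Version and ImageFsInfo request/response pairs interleaved with these listings are the standard CRI health and image-filesystem-stats RPCs; they also have direct crictl equivalents, and the whole CRI-O journal section can be re-collected from the profile. A minimal sketch, assuming the pause-584924 profile still exists:

	    sudo crictl version        # RuntimeService/Version
	    sudo crictl imagefsinfo    # ImageService/ImageFsInfo
	    minikube logs -p pause-584924 --file=logs.txt   # collects a fresh log bundle, including this CRI-O journal section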
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.021984241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b106800-ec28-4c0a-bd29-e49e0a04091a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.022099316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b106800-ec28-4c0a-bd29-e49e0a04091a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.022920315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b106800-ec28-4c0a-bd29-e49e0a04091a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.093132612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=09f8e719-77e5-4273-83ad-65210fe758d8 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.093215857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=09f8e719-77e5-4273-83ad-65210fe758d8 name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.095142323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=20c3f85b-11ad-41db-9c67-a1f5f032c4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.096039986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976536096019093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=20c3f85b-11ad-41db-9c67-a1f5f032c4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.096856836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ca087194-21b0-485e-afad-80cac30928dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.096953377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ca087194-21b0-485e-afad-80cac30928dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.097437929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ca087194-21b0-485e-afad-80cac30928dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.173458313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9d51c9a9-70e4-4366-953b-215ad7aff7aa name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.173549065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9d51c9a9-70e4-4366-953b-215ad7aff7aa name=/runtime.v1.RuntimeService/Version
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.175741714Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0911ec5e-3cd0-4bbf-b2ff-74449993812d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.176323248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699976536176304680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=0911ec5e-3cd0-4bbf-b2ff-74449993812d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.177135087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa1d86f8-ea0b-4875-88cc-e3504368752b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.177204280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fa1d86f8-ea0b-4875-88cc-e3504368752b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 15:42:16 pause-584924 crio[2512]: time="2023-11-14 15:42:16.177780332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699976511807662518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699976511820411297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c,PodSandboxId:6ba15586e47447a2553b9d6f17e66e644d3d7e11761af27699cbe46ebbf9eb7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699976506170536531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f681713
470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699976506151785732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51
e1d57b3c4,},Annotations:map[string]string{io.kubernetes.container.hash: ef4482c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c,PodSandboxId:922d7732d2ad1edba1f225ccd138f2e042232639caa34af2c8d229485d8abc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699976506112147903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b
60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6,PodSandboxId:e1ec52a3ddf678bffd2ee6db592e43ae27341c7dbf75ecbe21a14caa89ab0982,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699976506034680269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336,PodSandboxId:72261464b4b76dcb69437bdd7f38300b284853a0f1a13fb7502e5986eefb8b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_EXITED,CreatedAt:1699976482711980152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09064f3e3afb2d5af6ebc51e1d57b3c4,},Annotations:map[string]string{io.kubernet
es.container.hash: ef4482c5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87,PodSandboxId:24c6226ef323d3c87ec722116fd68bde4f304a367250bd688b15cb1a791dfdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_EXITED,CreatedAt:1699976469466938398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n97hp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d,},Annotations:map[string]string{io.kubernetes.container.hash: 61e95eb3,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b,PodSandboxId:6ca9a82cf7ac84c03f1598be8cb6e404b0b15b253dbe68e5a8374d5ff0f68cf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699976469371013406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jdh5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4909d89-2ca2-450b-8247-3c02fdf3a3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e05b51a8,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c,PodSandboxId:f750d24325b3d93156bc943e81a4b613086c159fd73806cf3f52b1940336f03f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1699976464138884879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-584924,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f681713470efeb90d13bbb1400c11f63,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a,PodSandboxId:b97c84c69c2d974a86590e5b43c6696c08553b7ecd2b5467422cbd24cabca3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699976463912763905,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4aa608ca82255f500ade68737f51f0,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 60d8d50f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5,PodSandboxId:12b4ba83a457a451eb2e423aa11221dc1cb6395f958ba6b3fb628cbe17fe2978,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699976464026484101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-584924,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b60f15f6e163bbfb03259506a81e2f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa1d86f8-ea0b-4875-88cc-e3504368752b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0758c80ed28c5       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   24 seconds ago       Running             kube-proxy                2                   24c6226ef323d       kube-proxy-n97hp
	6e0166e973de1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   24 seconds ago       Running             coredns                   2                   6ca9a82cf7ac8       coredns-5dd5756b68-jdh5n
	22c6a2f18998d       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   30 seconds ago       Running             kube-scheduler            2                   6ba15586e4744       kube-scheduler-pause-584924
	46b094bf992fe       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   30 seconds ago       Running             kube-apiserver            3                   72261464b4b76       kube-apiserver-pause-584924
	93b88ca280bdc       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   30 seconds ago       Running             kube-controller-manager   2                   922d7732d2ad1       kube-controller-manager-pause-584924
	c84581f31aa6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   30 seconds ago       Running             etcd                      2                   e1ec52a3ddf67       etcd-pause-584924
	23f1a21829bfb       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   53 seconds ago       Exited              kube-apiserver            2                   72261464b4b76       kube-apiserver-pause-584924
	2aab38cefaa11       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                1                   24c6226ef323d       kube-proxy-n97hp
	33eecf61003ec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   6ca9a82cf7ac8       coredns-5dd5756b68-jdh5n
	5a07a128ad54a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   About a minute ago   Exited              kube-scheduler            1                   f750d24325b3d       kube-scheduler-pause-584924
	647554a693a7b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   About a minute ago   Exited              kube-controller-manager   1                   12b4ba83a457a       kube-controller-manager-pause-584924
	c55ada6f40723       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      1                   b97c84c69c2d9       etcd-pause-584924
	
	* 
	* ==> coredns [33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35565 - 947 "HINFO IN 3670590539627910351.7552014881936575017. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010108578s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [6e0166e973de1d359dc2d5687479af08246a6fdc4a42d1b4babedb3ff95ef027] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58253 - 40892 "HINFO IN 8558496786850925552.4336328656922902964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009843603s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-584924
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-584924
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=pause-584924
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_39_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:39:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-584924
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 15:42:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 15:41:51 +0000   Tue, 14 Nov 2023 15:39:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    pause-584924
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 95e2f43f017a4f45a30e6960838bb782
	  System UUID:                95e2f43f-017a-4f45-a30e-6960838bb782
	  Boot ID:                    c5d0ebc1-df59-4686-af92-ea76b22027b3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jdh5n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m29s
	  kube-system                 etcd-pause-584924                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m43s
	  kube-system                 kube-apiserver-pause-584924             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-pause-584924    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-proxy-n97hp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-pause-584924             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  Starting                 24s                    kube-proxy       
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node pause-584924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node pause-584924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x7 over 2m52s)  kubelet          Node pause-584924 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m43s                  kubelet          Node pause-584924 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m43s                  kubelet          Node pause-584924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s                  kubelet          Node pause-584924 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m42s                  kubelet          Node pause-584924 status is now: NodeReady
	  Normal  RegisteredNode           2m30s                  node-controller  Node pause-584924 event: Registered Node pause-584924 in Controller
	  Normal  Starting                 50s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)      kubelet          Node pause-584924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)      kubelet          Node pause-584924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)      kubelet          Node pause-584924 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                    node-controller  Node pause-584924 event: Registered Node pause-584924 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070369] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.667629] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Nov14 15:39] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152631] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.106148] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.403756] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.108703] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.172869] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.106462] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.221153] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +10.273493] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +9.295905] systemd-fstab-generator[1253]: Ignoring "noauto" for root device
	[Nov14 15:40] kauditd_printk_skb: 19 callbacks suppressed
	[Nov14 15:41] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +0.258599] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.351040] systemd-fstab-generator[2280]: Ignoring "noauto" for root device
	[  +0.280694] systemd-fstab-generator[2295]: Ignoring "noauto" for root device
	[  +0.583986] systemd-fstab-generator[2399]: Ignoring "noauto" for root device
	[ +21.725415] systemd-fstab-generator[3241]: Ignoring "noauto" for root device
	[Nov14 15:42] hrtimer: interrupt took 5960249 ns
	
	* 
	* ==> etcd [c55ada6f4072325f75f015db321b53a5a4f83b9f21475410b45779d484aaaf7a] <==
	* 
	* 
	* ==> etcd [c84581f31aa6c41f17c387e6c2f5d65d5da73f85ee0cfa2c0170b7199b8ab9b6] <==
	* {"level":"info","ts":"2023-11-14T15:41:48.10383Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:41:48.103903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:41:48.115941Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-14T15:41:48.116184Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T15:41:48.116218Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T15:41:48.116276Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2023-11-14T15:41:48.116285Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2023-11-14T15:41:49.024521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-14T15:41:49.024659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:41:49.024724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 received MsgPreVoteResp from cde0bb267fc4e559 at term 2"}
	{"level":"info","ts":"2023-11-14T15:41:49.024773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became candidate at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.024804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 received MsgVoteResp from cde0bb267fc4e559 at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.024894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became leader at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.024931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cde0bb267fc4e559 elected leader cde0bb267fc4e559 at term 3"}
	{"level":"info","ts":"2023-11-14T15:41:49.031691Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cde0bb267fc4e559","local-member-attributes":"{Name:pause-584924 ClientURLs:[https://192.168.39.22:2379]}","request-path":"/0/members/cde0bb267fc4e559/attributes","cluster-id":"eaed0234649c774e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:41:49.031946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:41:49.033425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:41:49.034857Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.22:2379"}
	{"level":"info","ts":"2023-11-14T15:41:49.035891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:41:49.037697Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:41:49.037754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:42:11.921876Z","caller":"traceutil/trace.go:171","msg":"trace[148275023] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"308.283778ms","start":"2023-11-14T15:42:11.613548Z","end":"2023-11-14T15:42:11.921832Z","steps":["trace[148275023] 'process raft request'  (duration: 280.177549ms)","trace[148275023] 'compare'  (duration: 27.584896ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T15:42:11.923252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T15:42:11.613531Z","time spent":"308.489816ms","remote":"127.0.0.1:41630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-584924\" mod_revision:488 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-584924\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-584924\" > >"}
	{"level":"warn","ts":"2023-11-14T15:42:12.613903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.751783ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16526394026622089551 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.22\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.39.22\" value_size:66 lease:7303021989767313741 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.22\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-14T15:42:12.614187Z","caller":"traceutil/trace.go:171","msg":"trace[623537687] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"169.648501ms","start":"2023-11-14T15:42:12.444522Z","end":"2023-11-14T15:42:12.614171Z","steps":["trace[623537687] 'process raft request'  (duration: 56.018147ms)","trace[623537687] 'compare'  (duration: 112.637066ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  15:42:16 up 3 min,  0 users,  load average: 1.76, 0.85, 0.33
	Linux pause-584924 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336] <==
	* W1114 15:41:38.700187       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 15:41:41.267433       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 15:41:41.443309       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1114 15:41:43.868147       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	* 
	* ==> kube-apiserver [46b094bf992fe694322a919da3579f4e3c8f488d673b8095e2c2fafbc8e860dc] <==
	* I1114 15:41:50.855904       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1114 15:41:50.855960       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1114 15:41:50.856267       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1114 15:41:50.856431       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1114 15:41:51.024447       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 15:41:51.043938       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1114 15:41:51.044474       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1114 15:41:51.046851       1 shared_informer.go:318] Caches are synced for configmaps
	I1114 15:41:51.046924       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 15:41:51.054064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 15:41:51.057432       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1114 15:41:51.057500       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1114 15:41:51.056802       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1114 15:41:51.058139       1 aggregator.go:166] initial CRD sync complete...
	I1114 15:41:51.058168       1 autoregister_controller.go:141] Starting autoregister controller
	I1114 15:41:51.058189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 15:41:51.058211       1 cache.go:39] Caches are synced for autoregister controller
	I1114 15:41:51.870476       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 15:41:52.900558       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 15:41:52.933889       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 15:41:52.999785       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 15:41:53.039684       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 15:41:53.049682       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 15:42:04.159142       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 15:42:04.262763       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5] <==
	* 
	* 
	* ==> kube-controller-manager [93b88ca280bdce84316c42abfc46450df53b1da5c1c25f9784934310ba101c0c] <==
	* I1114 15:42:04.044890       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1114 15:42:04.044951       1 taint_manager.go:211] "Sending events to api server"
	I1114 15:42:04.045688       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-584924"
	I1114 15:42:04.045764       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1114 15:42:04.045904       1 event.go:307] "Event occurred" object="pause-584924" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-584924 event: Registered Node pause-584924 in Controller"
	I1114 15:42:04.048202       1 shared_informer.go:318] Caches are synced for crt configmap
	I1114 15:42:04.048535       1 shared_informer.go:318] Caches are synced for PVC protection
	I1114 15:42:04.049806       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1114 15:42:04.049886       1 shared_informer.go:318] Caches are synced for attach detach
	I1114 15:42:04.050196       1 shared_informer.go:318] Caches are synced for endpoint
	I1114 15:42:04.051984       1 shared_informer.go:318] Caches are synced for ephemeral
	I1114 15:42:04.057647       1 shared_informer.go:318] Caches are synced for TTL
	I1114 15:42:04.059311       1 shared_informer.go:318] Caches are synced for PV protection
	I1114 15:42:04.063812       1 shared_informer.go:318] Caches are synced for service account
	I1114 15:42:04.073295       1 shared_informer.go:318] Caches are synced for namespace
	I1114 15:42:04.076487       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1114 15:42:04.113462       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1114 15:42:04.163749       1 shared_informer.go:318] Caches are synced for resource quota
	I1114 15:42:04.200228       1 shared_informer.go:318] Caches are synced for resource quota
	I1114 15:42:04.234601       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1114 15:42:04.249531       1 shared_informer.go:318] Caches are synced for job
	I1114 15:42:04.253414       1 shared_informer.go:318] Caches are synced for cronjob
	I1114 15:42:04.600460       1 shared_informer.go:318] Caches are synced for garbage collector
	I1114 15:42:04.600550       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1114 15:42:04.610187       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [0758c80ed28c59e6ac78854a5b8c574a6b7432c74436f1c5645c53ec487b5130] <==
	* I1114 15:41:52.189055       1 server_others.go:69] "Using iptables proxy"
	I1114 15:41:52.208855       1 node.go:141] Successfully retrieved node IP: 192.168.39.22
	I1114 15:41:52.284562       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:41:52.284641       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:41:52.293121       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:41:52.293220       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:41:52.293495       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:41:52.293512       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:41:52.295497       1 config.go:188] "Starting service config controller"
	I1114 15:41:52.295542       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:41:52.295562       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:41:52.295566       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:41:52.296046       1 config.go:315] "Starting node config controller"
	I1114 15:41:52.296054       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:41:52.396462       1 shared_informer.go:318] Caches are synced for node config
	I1114 15:41:52.396573       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:41:52.396473       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87] <==
	* I1114 15:41:09.733074       1 server_others.go:69] "Using iptables proxy"
	E1114 15:41:09.736902       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	E1114 15:41:10.929844       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	E1114 15:41:13.265847       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	E1114 15:41:17.674908       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-584924": dial tcp 192.168.39.22:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [22c6a2f18998ded678533a341911b01f628d618ac9dedf6e15b7f444e902f17c] <==
	* I1114 15:41:48.992001       1 serving.go:348] Generated self-signed cert in-memory
	W1114 15:41:50.963985       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 15:41:50.964104       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 15:41:50.964147       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 15:41:50.964178       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 15:41:51.015113       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 15:41:51.015219       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:41:51.022807       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 15:41:51.023032       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 15:41:51.024605       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 15:41:51.024713       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 15:41:51.124279       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:39:00 UTC, ends at Tue 2023-11-14 15:42:17 UTC. --
	Nov 14 15:41:45 pause-584924 kubelet[3247]: I1114 15:41:45.108500    3247 scope.go:117] "RemoveContainer" containerID="5a07a128ad54ade841cb87085eb51fbe26707915addb44495324052072a6b98c"
	Nov 14 15:41:45 pause-584924 kubelet[3247]: I1114 15:41:45.109559    3247 scope.go:117] "RemoveContainer" containerID="647554a693a7b1de2b2376ae36c3cfb8000a0f6c69dec56a60482bff838eabc5"
	Nov 14 15:41:45 pause-584924 kubelet[3247]: E1114 15:41:45.278908    3247 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-584924?timeout=10s\": dial tcp 192.168.39.22:8443: connect: connection refused" interval="800ms"
	Nov 14 15:41:45 pause-584924 kubelet[3247]: I1114 15:41:45.813250    3247 scope.go:117] "RemoveContainer" containerID="23f1a21829bfba2fff2ea7f8e5e97784909580d2faea62989797f0940184e336"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.087178    3247 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-584924?timeout=10s\": dial tcp 192.168.39.22:8443: connect: connection refused" interval="1.6s"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: W1114 15:41:46.648745    3247 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.648822    3247 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:46 pause-584924 kubelet[3247]: I1114 15:41:46.684727    3247 kubelet_node_status.go:70] "Attempting to register node" node="pause-584924"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.685292    3247 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.22:8443: connect: connection refused" node="pause-584924"
	Nov 14 15:41:46 pause-584924 kubelet[3247]: E1114 15:41:46.781495    3247 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-584924\" not found"
	Nov 14 15:41:47 pause-584924 kubelet[3247]: W1114 15:41:47.047666    3247 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:47 pause-584924 kubelet[3247]: E1114 15:41:47.047723    3247 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	Nov 14 15:41:49 pause-584924 kubelet[3247]: I1114 15:41:49.887184    3247 kubelet_node_status.go:70] "Attempting to register node" node="pause-584924"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.081914    3247 kubelet_node_status.go:108] "Node was previously registered" node="pause-584924"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.082112    3247 kubelet_node_status.go:73] "Successfully registered node" node="pause-584924"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.083942    3247 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.085022    3247 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.457268    3247 apiserver.go:52] "Watching apiserver"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.462919    3247 topology_manager.go:215] "Topology Admit Handler" podUID="d4909d89-2ca2-450b-8247-3c02fdf3a3b5" podNamespace="kube-system" podName="coredns-5dd5756b68-jdh5n"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.463106    3247 topology_manager.go:215] "Topology Admit Handler" podUID="1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d" podNamespace="kube-system" podName="kube-proxy-n97hp"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.479813    3247 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.505697    3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d-lib-modules\") pod \"kube-proxy-n97hp\" (UID: \"1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d\") " pod="kube-system/kube-proxy-n97hp"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.505779    3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d-xtables-lock\") pod \"kube-proxy-n97hp\" (UID: \"1e9c91c0-a1a4-47a8-8d7a-ef9bdff22c4d\") " pod="kube-system/kube-proxy-n97hp"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.764090    3247 scope.go:117] "RemoveContainer" containerID="2aab38cefaa11278837378fcfcbd1df9648308ab2a2df81da2208d34f8bcbc87"
	Nov 14 15:41:51 pause-584924 kubelet[3247]: I1114 15:41:51.766235    3247 scope.go:117] "RemoveContainer" containerID="33eecf61003ec1cb3e072f4408e56e6014061d0b12968c39d974a80cdb9c1c3b"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-584924 -n pause-584924
helpers_test.go:261: (dbg) Run:  kubectl --context pause-584924 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (107.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-490998 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-490998 --alsologtostderr -v=3: exit status 82 (2m1.509889645s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-490998"  ...
	* Stopping node "no-preload-490998"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:46:06.647322  875122 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:46:06.647483  875122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:46:06.647497  875122 out.go:309] Setting ErrFile to fd 2...
	I1114 15:46:06.647505  875122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:46:06.647695  875122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:46:06.647957  875122 out.go:303] Setting JSON to false
	I1114 15:46:06.648054  875122 mustload.go:65] Loading cluster: no-preload-490998
	I1114 15:46:06.648388  875122 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:46:06.648461  875122 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/config.json ...
	I1114 15:46:06.648636  875122 mustload.go:65] Loading cluster: no-preload-490998
	I1114 15:46:06.648772  875122 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:46:06.648818  875122 stop.go:39] StopHost: no-preload-490998
	I1114 15:46:06.649356  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:46:06.649418  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:46:06.665427  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I1114 15:46:06.665986  875122 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:46:06.666700  875122 main.go:141] libmachine: Using API Version  1
	I1114 15:46:06.666722  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:46:06.667111  875122 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:46:06.669681  875122 out.go:177] * Stopping node "no-preload-490998"  ...
	I1114 15:46:06.671310  875122 main.go:141] libmachine: Stopping "no-preload-490998"...
	I1114 15:46:06.671330  875122 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:46:06.673500  875122 main.go:141] libmachine: (no-preload-490998) Calling .Stop
	I1114 15:46:06.678009  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 0/60
	I1114 15:46:07.679553  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 1/60
	I1114 15:46:08.682005  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 2/60
	I1114 15:46:09.683628  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 3/60
	I1114 15:46:10.685671  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 4/60
	I1114 15:46:11.687058  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 5/60
	I1114 15:46:12.688860  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 6/60
	I1114 15:46:13.690627  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 7/60
	I1114 15:46:14.692131  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 8/60
	I1114 15:46:15.693590  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 9/60
	I1114 15:46:16.696093  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 10/60
	I1114 15:46:17.698034  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 11/60
	I1114 15:46:18.699769  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 12/60
	I1114 15:46:19.701637  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 13/60
	I1114 15:46:20.703456  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 14/60
	I1114 15:46:21.705493  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 15/60
	I1114 15:46:22.706909  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 16/60
	I1114 15:46:23.708717  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 17/60
	I1114 15:46:24.710871  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 18/60
	I1114 15:46:25.712240  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 19/60
	I1114 15:46:26.714444  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 20/60
	I1114 15:46:27.715994  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 21/60
	I1114 15:46:28.717629  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 22/60
	I1114 15:46:29.719162  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 23/60
	I1114 15:46:30.720481  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 24/60
	I1114 15:46:31.722220  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 25/60
	I1114 15:46:32.723728  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 26/60
	I1114 15:46:33.725149  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 27/60
	I1114 15:46:34.726700  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 28/60
	I1114 15:46:35.728260  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 29/60
	I1114 15:46:36.730452  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 30/60
	I1114 15:46:37.732006  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 31/60
	I1114 15:46:38.733568  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 32/60
	I1114 15:46:39.735643  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 33/60
	I1114 15:46:40.738027  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 34/60
	I1114 15:46:41.740140  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 35/60
	I1114 15:46:42.741653  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 36/60
	I1114 15:46:43.743051  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 37/60
	I1114 15:46:44.744597  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 38/60
	I1114 15:46:45.746251  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 39/60
	I1114 15:46:46.748603  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 40/60
	I1114 15:46:47.750139  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 41/60
	I1114 15:46:48.751791  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 42/60
	I1114 15:46:49.753432  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 43/60
	I1114 15:46:50.755072  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 44/60
	I1114 15:46:51.757593  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 45/60
	I1114 15:46:52.759353  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 46/60
	I1114 15:46:53.760913  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 47/60
	I1114 15:46:54.763269  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 48/60
	I1114 15:46:55.764947  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 49/60
	I1114 15:46:56.767407  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 50/60
	I1114 15:46:57.768839  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 51/60
	I1114 15:46:58.770365  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 52/60
	I1114 15:46:59.771715  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 53/60
	I1114 15:47:00.773393  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 54/60
	I1114 15:47:01.775452  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 55/60
	I1114 15:47:02.776864  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 56/60
	I1114 15:47:03.778417  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 57/60
	I1114 15:47:04.779649  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 58/60
	I1114 15:47:05.781200  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 59/60
	I1114 15:47:06.782327  875122 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:47:06.782402  875122 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:47:06.782425  875122 retry.go:31] will retry after 1.171414219s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:47:07.954761  875122 stop.go:39] StopHost: no-preload-490998
	I1114 15:47:07.955139  875122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:47:07.955182  875122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:47:07.969743  875122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1114 15:47:07.970191  875122 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:47:07.970699  875122 main.go:141] libmachine: Using API Version  1
	I1114 15:47:07.970725  875122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:47:07.971125  875122 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:47:07.973265  875122 out.go:177] * Stopping node "no-preload-490998"  ...
	I1114 15:47:07.974708  875122 main.go:141] libmachine: Stopping "no-preload-490998"...
	I1114 15:47:07.974726  875122 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:47:07.976321  875122 main.go:141] libmachine: (no-preload-490998) Calling .Stop
	I1114 15:47:07.979879  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 0/60
	I1114 15:47:08.981419  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 1/60
	I1114 15:47:09.983069  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 2/60
	I1114 15:47:10.984635  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 3/60
	I1114 15:47:11.986181  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 4/60
	I1114 15:47:12.988012  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 5/60
	I1114 15:47:13.990547  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 6/60
	I1114 15:47:14.992272  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 7/60
	I1114 15:47:15.993708  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 8/60
	I1114 15:47:16.995938  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 9/60
	I1114 15:47:17.997919  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 10/60
	I1114 15:47:18.999277  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 11/60
	I1114 15:47:20.000787  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 12/60
	I1114 15:47:21.002277  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 13/60
	I1114 15:47:22.003807  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 14/60
	I1114 15:47:23.005707  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 15/60
	I1114 15:47:24.007475  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 16/60
	I1114 15:47:25.008820  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 17/60
	I1114 15:47:26.010453  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 18/60
	I1114 15:47:27.011985  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 19/60
	I1114 15:47:28.014224  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 20/60
	I1114 15:47:29.015813  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 21/60
	I1114 15:47:30.017372  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 22/60
	I1114 15:47:31.019156  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 23/60
	I1114 15:47:32.020820  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 24/60
	I1114 15:47:33.022820  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 25/60
	I1114 15:47:34.024326  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 26/60
	I1114 15:47:35.025968  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 27/60
	I1114 15:47:36.027436  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 28/60
	I1114 15:47:37.028971  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 29/60
	I1114 15:47:38.031052  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 30/60
	I1114 15:47:39.032851  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 31/60
	I1114 15:47:40.034430  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 32/60
	I1114 15:47:41.036060  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 33/60
	I1114 15:47:42.037528  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 34/60
	I1114 15:47:43.039615  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 35/60
	I1114 15:47:44.040970  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 36/60
	I1114 15:47:45.042526  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 37/60
	I1114 15:47:46.044267  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 38/60
	I1114 15:47:47.046346  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 39/60
	I1114 15:47:48.048382  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 40/60
	I1114 15:47:49.049871  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 41/60
	I1114 15:47:50.051680  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 42/60
	I1114 15:47:51.053184  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 43/60
	I1114 15:47:52.055528  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 44/60
	I1114 15:47:53.057551  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 45/60
	I1114 15:47:54.059234  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 46/60
	I1114 15:47:55.060846  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 47/60
	I1114 15:47:56.062377  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 48/60
	I1114 15:47:57.063823  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 49/60
	I1114 15:47:58.065856  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 50/60
	I1114 15:47:59.067480  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 51/60
	I1114 15:48:00.068836  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 52/60
	I1114 15:48:01.070288  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 53/60
	I1114 15:48:02.071667  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 54/60
	I1114 15:48:03.073670  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 55/60
	I1114 15:48:04.075280  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 56/60
	I1114 15:48:05.076954  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 57/60
	I1114 15:48:06.078410  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 58/60
	I1114 15:48:07.080016  875122 main.go:141] libmachine: (no-preload-490998) Waiting for machine to stop 59/60
	I1114 15:48:08.081128  875122 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:48:08.081186  875122 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:48:08.082908  875122 out.go:177] 
	W1114 15:48:08.084319  875122 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1114 15:48:08.084343  875122 out.go:239] * 
	* 
	W1114 15:48:08.090054  875122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 15:48:08.091411  875122 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-490998 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998: exit status 3 (18.512224903s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:26.605171  875814 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	E1114 15:48:26.605193  875814 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-490998" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.02s)
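
The log above shows the failing stop path in full: the kvm2 driver issues .Stop, polls .GetState once per second for 60 iterations, retries the whole sequence once after a sub-second backoff, and then exits with GUEST_STOP_TIMEOUT (exit status 82). The snippet below is a minimal Go sketch of that wait-and-retry shape only; it is not minikube's actual stop.go, and the helper names (stopVM, vmState, waitForStop) are hypothetical stand-ins for the driver calls seen in the log.

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// vmState and stopVM are hypothetical stand-ins for the driver's
	// GetState and Stop calls logged above.
	func vmState(name string) string { return "Running" }
	func stopVM(name string)         {}
	
	// waitForStop mirrors the "Waiting for machine to stop N/60" loop:
	// request a stop, then poll once per second for up to 60 seconds.
	func waitForStop(name string) error {
		stopVM(name)
		for i := 0; i < 60; i++ {
			if vmState(name) != "Running" {
				return nil
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/60\n", name, i)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", vmState(name))
	}
	
	func main() {
		name := "no-preload-490998"
		if err := waitForStop(name); err != nil {
			// one retry after a short backoff, as retry.go logs above
			time.Sleep(1171 * time.Millisecond)
			if err = waitForStop(name); err != nil {
				fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			}
		}
	}
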

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-279880 --alsologtostderr -v=3
E1114 15:46:27.620441  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-279880 --alsologtostderr -v=3: exit status 82 (2m0.914896252s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-279880"  ...
	* Stopping node "embed-certs-279880"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:46:19.021602  875240 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:46:19.021920  875240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:46:19.021932  875240 out.go:309] Setting ErrFile to fd 2...
	I1114 15:46:19.021939  875240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:46:19.022182  875240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:46:19.022458  875240 out.go:303] Setting JSON to false
	I1114 15:46:19.022562  875240 mustload.go:65] Loading cluster: embed-certs-279880
	I1114 15:46:19.022981  875240 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:46:19.023073  875240 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/config.json ...
	I1114 15:46:19.023269  875240 mustload.go:65] Loading cluster: embed-certs-279880
	I1114 15:46:19.023405  875240 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:46:19.023458  875240 stop.go:39] StopHost: embed-certs-279880
	I1114 15:46:19.023890  875240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:46:19.023960  875240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:46:19.039869  875240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I1114 15:46:19.040357  875240 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:46:19.041054  875240 main.go:141] libmachine: Using API Version  1
	I1114 15:46:19.041084  875240 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:46:19.041452  875240 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:46:19.044099  875240 out.go:177] * Stopping node "embed-certs-279880"  ...
	I1114 15:46:19.046148  875240 main.go:141] libmachine: Stopping "embed-certs-279880"...
	I1114 15:46:19.046175  875240 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:46:19.048518  875240 main.go:141] libmachine: (embed-certs-279880) Calling .Stop
	I1114 15:46:19.052387  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 0/60
	I1114 15:46:20.054094  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 1/60
	I1114 15:46:21.056335  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 2/60
	I1114 15:46:22.057919  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 3/60
	I1114 15:46:23.059572  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 4/60
	I1114 15:46:24.061699  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 5/60
	I1114 15:46:25.063388  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 6/60
	I1114 15:46:26.065179  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 7/60
	I1114 15:46:27.067601  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 8/60
	I1114 15:46:28.069131  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 9/60
	I1114 15:46:29.070692  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 10/60
	I1114 15:46:30.073130  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 11/60
	I1114 15:46:31.074353  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 12/60
	I1114 15:46:32.075759  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 13/60
	I1114 15:46:33.077369  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 14/60
	I1114 15:46:34.079520  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 15/60
	I1114 15:46:35.081195  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 16/60
	I1114 15:46:36.082775  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 17/60
	I1114 15:46:37.084484  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 18/60
	I1114 15:46:38.085850  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 19/60
	I1114 15:46:39.087802  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 20/60
	I1114 15:46:40.089174  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 21/60
	I1114 15:46:41.090728  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 22/60
	I1114 15:46:42.092395  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 23/60
	I1114 15:46:43.094217  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 24/60
	I1114 15:46:44.096547  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 25/60
	I1114 15:46:45.098148  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 26/60
	I1114 15:46:46.099531  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 27/60
	I1114 15:46:47.101013  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 28/60
	I1114 15:46:48.102369  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 29/60
	I1114 15:46:49.104733  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 30/60
	I1114 15:46:50.106078  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 31/60
	I1114 15:46:51.107584  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 32/60
	I1114 15:46:52.108982  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 33/60
	I1114 15:46:53.110747  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 34/60
	I1114 15:46:54.112663  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 35/60
	I1114 15:46:55.114141  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 36/60
	I1114 15:46:56.115483  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 37/60
	I1114 15:46:57.116929  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 38/60
	I1114 15:46:58.118180  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 39/60
	I1114 15:46:59.120568  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 40/60
	I1114 15:47:00.121948  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 41/60
	I1114 15:47:01.123355  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 42/60
	I1114 15:47:02.125033  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 43/60
	I1114 15:47:03.127107  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 44/60
	I1114 15:47:04.128928  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 45/60
	I1114 15:47:05.130218  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 46/60
	I1114 15:47:06.131617  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 47/60
	I1114 15:47:07.132967  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 48/60
	I1114 15:47:08.135256  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 49/60
	I1114 15:47:09.137277  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 50/60
	I1114 15:47:10.139721  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 51/60
	I1114 15:47:11.141354  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 52/60
	I1114 15:47:12.142928  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 53/60
	I1114 15:47:13.144172  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 54/60
	I1114 15:47:14.146316  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 55/60
	I1114 15:47:15.147748  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 56/60
	I1114 15:47:16.149334  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 57/60
	I1114 15:47:17.151384  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 58/60
	I1114 15:47:18.152909  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 59/60
	I1114 15:47:19.154202  875240 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:47:19.154263  875240 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:47:19.154283  875240 retry.go:31] will retry after 584.495455ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:47:19.739007  875240 stop.go:39] StopHost: embed-certs-279880
	I1114 15:47:19.739469  875240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:47:19.739531  875240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:47:19.754408  875240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I1114 15:47:19.754904  875240 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:47:19.755405  875240 main.go:141] libmachine: Using API Version  1
	I1114 15:47:19.755441  875240 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:47:19.755774  875240 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:47:19.757843  875240 out.go:177] * Stopping node "embed-certs-279880"  ...
	I1114 15:47:19.759224  875240 main.go:141] libmachine: Stopping "embed-certs-279880"...
	I1114 15:47:19.759239  875240 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:47:19.760704  875240 main.go:141] libmachine: (embed-certs-279880) Calling .Stop
	I1114 15:47:19.763716  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 0/60
	I1114 15:47:20.765088  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 1/60
	I1114 15:47:21.767222  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 2/60
	I1114 15:47:22.768674  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 3/60
	I1114 15:47:23.770311  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 4/60
	I1114 15:47:24.772062  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 5/60
	I1114 15:47:25.773747  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 6/60
	I1114 15:47:26.775273  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 7/60
	I1114 15:47:27.776913  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 8/60
	I1114 15:47:28.778218  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 9/60
	I1114 15:47:29.780269  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 10/60
	I1114 15:47:30.781827  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 11/60
	I1114 15:47:31.783447  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 12/60
	I1114 15:47:32.785222  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 13/60
	I1114 15:47:33.786706  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 14/60
	I1114 15:47:34.789294  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 15/60
	I1114 15:47:35.790876  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 16/60
	I1114 15:47:36.792348  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 17/60
	I1114 15:47:37.793928  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 18/60
	I1114 15:47:38.795407  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 19/60
	I1114 15:47:39.797246  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 20/60
	I1114 15:47:40.798745  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 21/60
	I1114 15:47:41.800336  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 22/60
	I1114 15:47:42.802247  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 23/60
	I1114 15:47:43.804003  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 24/60
	I1114 15:47:44.805712  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 25/60
	I1114 15:47:45.807493  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 26/60
	I1114 15:47:46.808996  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 27/60
	I1114 15:47:47.810510  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 28/60
	I1114 15:47:48.811869  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 29/60
	I1114 15:47:49.813865  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 30/60
	I1114 15:47:50.815478  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 31/60
	I1114 15:47:51.817103  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 32/60
	I1114 15:47:52.819584  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 33/60
	I1114 15:47:53.821180  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 34/60
	I1114 15:47:54.822915  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 35/60
	I1114 15:47:55.824532  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 36/60
	I1114 15:47:56.826219  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 37/60
	I1114 15:47:57.827721  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 38/60
	I1114 15:47:58.829127  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 39/60
	I1114 15:47:59.831402  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 40/60
	I1114 15:48:00.832818  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 41/60
	I1114 15:48:01.834423  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 42/60
	I1114 15:48:02.835653  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 43/60
	I1114 15:48:03.837334  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 44/60
	I1114 15:48:04.839356  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 45/60
	I1114 15:48:05.840650  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 46/60
	I1114 15:48:06.842374  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 47/60
	I1114 15:48:07.843744  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 48/60
	I1114 15:48:08.845270  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 49/60
	I1114 15:48:09.847342  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 50/60
	I1114 15:48:10.848784  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 51/60
	I1114 15:48:11.850124  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 52/60
	I1114 15:48:12.851465  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 53/60
	I1114 15:48:13.853100  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 54/60
	I1114 15:48:14.854745  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 55/60
	I1114 15:48:15.856258  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 56/60
	I1114 15:48:16.857811  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 57/60
	I1114 15:48:17.859537  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 58/60
	I1114 15:48:18.860955  875240 main.go:141] libmachine: (embed-certs-279880) Waiting for machine to stop 59/60
	I1114 15:48:19.862430  875240 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:48:19.862568  875240 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:48:19.864767  875240 out.go:177] 
	W1114 15:48:19.866165  875240 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1114 15:48:19.866186  875240 out.go:239] * 
	* 
	W1114 15:48:19.871781  875240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 15:48:19.873202  875240 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-279880 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
E1114 15:48:22.912852  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:22.918140  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:22.928417  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:22.948710  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:22.989076  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:23.069440  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:23.229924  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:23.550735  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:24.191342  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:25.471526  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880: exit status 3 (18.506537051s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:38.381137  875885 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E1114 15:48:38.381165  875885 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-279880" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.42s)
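
To reproduce this subtest's failing step locally, the two commands the harness runs are the ones quoted verbatim in the log above (binary path and profile name as recorded there); on this run the stop returned exit status 82 and the follow-up status probe exit status 3:

	out/minikube-linux-amd64 stop -p embed-certs-279880 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
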

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-842105 --alsologtostderr -v=3
E1114 15:46:41.495658  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:46.616585  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:56.857788  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-842105 --alsologtostderr -v=3: exit status 82 (2m1.3668491s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-842105"  ...
	* Stopping node "old-k8s-version-842105"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:46:40.005884  875417 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:46:40.006132  875417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:46:40.006143  875417 out.go:309] Setting ErrFile to fd 2...
	I1114 15:46:40.006147  875417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:46:40.006351  875417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:46:40.006603  875417 out.go:303] Setting JSON to false
	I1114 15:46:40.006697  875417 mustload.go:65] Loading cluster: old-k8s-version-842105
	I1114 15:46:40.007066  875417 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:46:40.007142  875417 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/config.json ...
	I1114 15:46:40.007361  875417 mustload.go:65] Loading cluster: old-k8s-version-842105
	I1114 15:46:40.007493  875417 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:46:40.007530  875417 stop.go:39] StopHost: old-k8s-version-842105
	I1114 15:46:40.007929  875417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:46:40.007986  875417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:46:40.023437  875417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I1114 15:46:40.024097  875417 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:46:40.024735  875417 main.go:141] libmachine: Using API Version  1
	I1114 15:46:40.024783  875417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:46:40.025171  875417 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:46:40.027831  875417 out.go:177] * Stopping node "old-k8s-version-842105"  ...
	I1114 15:46:40.029224  875417 main.go:141] libmachine: Stopping "old-k8s-version-842105"...
	I1114 15:46:40.029248  875417 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:46:40.031231  875417 main.go:141] libmachine: (old-k8s-version-842105) Calling .Stop
	I1114 15:46:40.034785  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 0/60
	I1114 15:46:41.036559  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 1/60
	I1114 15:46:42.038209  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 2/60
	I1114 15:46:43.040723  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 3/60
	I1114 15:46:44.042441  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 4/60
	I1114 15:46:45.044795  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 5/60
	I1114 15:46:46.046413  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 6/60
	I1114 15:46:47.048001  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 7/60
	I1114 15:46:48.049659  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 8/60
	I1114 15:46:49.051253  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 9/60
	I1114 15:46:50.052776  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 10/60
	I1114 15:46:51.054306  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 11/60
	I1114 15:46:52.055712  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 12/60
	I1114 15:46:53.057406  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 13/60
	I1114 15:46:54.059100  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 14/60
	I1114 15:46:55.061220  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 15/60
	I1114 15:46:56.062745  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 16/60
	I1114 15:46:57.065134  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 17/60
	I1114 15:46:58.066682  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 18/60
	I1114 15:46:59.068011  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 19/60
	I1114 15:47:00.070394  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 20/60
	I1114 15:47:01.071723  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 21/60
	I1114 15:47:02.073487  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 22/60
	I1114 15:47:03.074997  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 23/60
	I1114 15:47:04.076775  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 24/60
	I1114 15:47:05.078815  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 25/60
	I1114 15:47:06.080082  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 26/60
	I1114 15:47:07.081809  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 27/60
	I1114 15:47:08.083284  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 28/60
	I1114 15:47:09.085687  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 29/60
	I1114 15:47:10.088090  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 30/60
	I1114 15:47:11.089663  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 31/60
	I1114 15:47:12.091015  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 32/60
	I1114 15:47:13.092602  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 33/60
	I1114 15:47:14.094233  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 34/60
	I1114 15:47:15.096346  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 35/60
	I1114 15:47:16.097908  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 36/60
	I1114 15:47:17.099350  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 37/60
	I1114 15:47:18.100875  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 38/60
	I1114 15:47:19.102370  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 39/60
	I1114 15:47:20.104812  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 40/60
	I1114 15:47:21.106251  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 41/60
	I1114 15:47:22.107938  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 42/60
	I1114 15:47:23.109645  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 43/60
	I1114 15:47:24.111127  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 44/60
	I1114 15:47:25.113564  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 45/60
	I1114 15:47:26.115031  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 46/60
	I1114 15:47:27.117047  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 47/60
	I1114 15:47:28.118360  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 48/60
	I1114 15:47:29.119895  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 49/60
	I1114 15:47:30.122063  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 50/60
	I1114 15:47:31.123568  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 51/60
	I1114 15:47:32.125142  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 52/60
	I1114 15:47:33.126739  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 53/60
	I1114 15:47:34.128275  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 54/60
	I1114 15:47:35.130734  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 55/60
	I1114 15:47:36.132067  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 56/60
	I1114 15:47:37.133419  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 57/60
	I1114 15:47:38.135084  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 58/60
	I1114 15:47:39.136581  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 59/60
	I1114 15:47:40.137850  875417 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:47:40.137900  875417 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:47:40.137918  875417 retry.go:31] will retry after 1.028376558s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:47:41.167094  875417 stop.go:39] StopHost: old-k8s-version-842105
	I1114 15:47:41.167694  875417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:47:41.167761  875417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:47:41.182502  875417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43361
	I1114 15:47:41.183015  875417 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:47:41.183500  875417 main.go:141] libmachine: Using API Version  1
	I1114 15:47:41.183530  875417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:47:41.183899  875417 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:47:41.186281  875417 out.go:177] * Stopping node "old-k8s-version-842105"  ...
	I1114 15:47:41.187764  875417 main.go:141] libmachine: Stopping "old-k8s-version-842105"...
	I1114 15:47:41.187791  875417 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:47:41.189777  875417 main.go:141] libmachine: (old-k8s-version-842105) Calling .Stop
	I1114 15:47:41.193827  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 0/60
	I1114 15:47:42.195399  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 1/60
	I1114 15:47:43.197043  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 2/60
	I1114 15:47:44.198685  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 3/60
	I1114 15:47:45.200389  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 4/60
	I1114 15:47:46.202475  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 5/60
	I1114 15:47:47.204199  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 6/60
	I1114 15:47:48.205635  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 7/60
	I1114 15:47:49.207180  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 8/60
	I1114 15:47:50.208671  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 9/60
	I1114 15:47:51.210581  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 10/60
	I1114 15:47:52.212319  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 11/60
	I1114 15:47:53.213937  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 12/60
	I1114 15:47:54.215366  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 13/60
	I1114 15:47:55.217001  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 14/60
	I1114 15:47:56.218888  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 15/60
	I1114 15:47:57.220356  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 16/60
	I1114 15:47:58.221786  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 17/60
	I1114 15:47:59.223142  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 18/60
	I1114 15:48:00.224716  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 19/60
	I1114 15:48:01.226901  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 20/60
	I1114 15:48:02.228449  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 21/60
	I1114 15:48:03.230088  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 22/60
	I1114 15:48:04.231652  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 23/60
	I1114 15:48:05.233345  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 24/60
	I1114 15:48:06.235377  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 25/60
	I1114 15:48:07.237000  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 26/60
	I1114 15:48:08.238204  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 27/60
	I1114 15:48:09.240606  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 28/60
	I1114 15:48:10.242163  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 29/60
	I1114 15:48:11.244104  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 30/60
	I1114 15:48:12.245795  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 31/60
	I1114 15:48:13.247160  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 32/60
	I1114 15:48:14.248680  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 33/60
	I1114 15:48:15.250220  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 34/60
	I1114 15:48:16.252393  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 35/60
	I1114 15:48:17.253922  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 36/60
	I1114 15:48:18.255581  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 37/60
	I1114 15:48:19.257215  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 38/60
	I1114 15:48:20.258940  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 39/60
	I1114 15:48:21.260994  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 40/60
	I1114 15:48:22.262544  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 41/60
	I1114 15:48:23.264128  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 42/60
	I1114 15:48:24.265565  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 43/60
	I1114 15:48:25.267081  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 44/60
	I1114 15:48:26.269147  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 45/60
	I1114 15:48:27.270831  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 46/60
	I1114 15:48:28.272373  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 47/60
	I1114 15:48:29.274033  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 48/60
	I1114 15:48:30.275441  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 49/60
	I1114 15:48:31.277516  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 50/60
	I1114 15:48:32.278793  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 51/60
	I1114 15:48:33.280419  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 52/60
	I1114 15:48:34.281996  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 53/60
	I1114 15:48:35.283645  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 54/60
	I1114 15:48:36.285710  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 55/60
	I1114 15:48:37.287321  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 56/60
	I1114 15:48:38.288647  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 57/60
	I1114 15:48:39.290044  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 58/60
	I1114 15:48:40.291536  875417 main.go:141] libmachine: (old-k8s-version-842105) Waiting for machine to stop 59/60
	I1114 15:48:41.292616  875417 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:48:41.292679  875417 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:48:41.294606  875417 out.go:177] 
	W1114 15:48:41.296108  875417 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1114 15:48:41.296124  875417 out.go:239] * 
	* 
	W1114 15:48:41.301694  875417 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 15:48:41.303058  875417 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-842105 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105: exit status 3 (18.579847017s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:59.885132  876117 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	E1114 15:48:59.885160  876117 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-842105" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.95s)
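Note on the failure pattern above: the stop path asks the kvm2 driver to power the guest off and then polls once per second for 60 attempts before declaring the stop failed. The sketch below is illustrative only, not minikube's actual implementation; the vm interface, the stuckVM type, and the waitForStop name are assumptions introduced for the example, while the log text and the 60-attempt budget come from the captured output.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vm is a hypothetical stand-in for the libmachine driver: Stop requests a
	// shutdown and State reports what the hypervisor currently sees.
	type vm interface {
		Stop() error
		State() (string, error)
	}

	// waitForStop mirrors the pattern visible in the log: request a stop, then
	// poll once per second for up to maxAttempts before giving up with the same
	// error text the test captured.
	func waitForStop(m vm, maxAttempts int) error {
		if err := m.Stop(); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			if st, err := m.State(); err == nil && st == "Stopped" {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// stuckVM simulates the failure mode in this report: the guest never leaves Running.
	type stuckVM struct{}

	func (stuckVM) Stop() error            { return nil }
	func (stuckVM) State() (string, error) { return "Running", nil }

	func main() {
		if err := waitForStop(stuckVM{}, 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}

Because the guest never leaves "Running", the loop exhausts its budget and the caller surfaces the GUEST_STOP_TIMEOUT shown above.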

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-529430 --alsologtostderr -v=3
E1114 15:47:21.221688  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.226973  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.237230  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.257513  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.297815  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.378140  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.538642  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:21.859748  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:22.500837  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:23.781649  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:26.342581  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:31.463471  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:41.703711  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:47:58.299139  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:48:02.184034  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-529430 --alsologtostderr -v=3: exit status 82 (2m1.259660931s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-529430"  ...
	* Stopping node "default-k8s-diff-port-529430"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:47:17.793808  875655 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:47:17.794123  875655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:47:17.794133  875655 out.go:309] Setting ErrFile to fd 2...
	I1114 15:47:17.794138  875655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:47:17.794359  875655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:47:17.794640  875655 out.go:303] Setting JSON to false
	I1114 15:47:17.794749  875655 mustload.go:65] Loading cluster: default-k8s-diff-port-529430
	I1114 15:47:17.795114  875655 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:47:17.795199  875655 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:47:17.795388  875655 mustload.go:65] Loading cluster: default-k8s-diff-port-529430
	I1114 15:47:17.795526  875655 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:47:17.795567  875655 stop.go:39] StopHost: default-k8s-diff-port-529430
	I1114 15:47:17.796070  875655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:47:17.796133  875655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:47:17.811239  875655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I1114 15:47:17.811773  875655 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:47:17.812381  875655 main.go:141] libmachine: Using API Version  1
	I1114 15:47:17.812405  875655 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:47:17.812772  875655 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:47:17.815394  875655 out.go:177] * Stopping node "default-k8s-diff-port-529430"  ...
	I1114 15:47:17.816863  875655 main.go:141] libmachine: Stopping "default-k8s-diff-port-529430"...
	I1114 15:47:17.816883  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:47:17.818380  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Stop
	I1114 15:47:17.822214  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 0/60
	I1114 15:47:18.823959  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 1/60
	I1114 15:47:19.825606  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 2/60
	I1114 15:47:20.827088  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 3/60
	I1114 15:47:21.828453  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 4/60
	I1114 15:47:22.830877  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 5/60
	I1114 15:47:23.832311  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 6/60
	I1114 15:47:24.833935  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 7/60
	I1114 15:47:25.835218  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 8/60
	I1114 15:47:26.836809  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 9/60
	I1114 15:47:27.838071  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 10/60
	I1114 15:47:28.839486  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 11/60
	I1114 15:47:29.840986  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 12/60
	I1114 15:47:30.842367  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 13/60
	I1114 15:47:31.843823  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 14/60
	I1114 15:47:32.846040  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 15/60
	I1114 15:47:33.847372  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 16/60
	I1114 15:47:34.848634  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 17/60
	I1114 15:47:35.850258  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 18/60
	I1114 15:47:36.851588  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 19/60
	I1114 15:47:37.853836  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 20/60
	I1114 15:47:38.855793  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 21/60
	I1114 15:47:39.857243  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 22/60
	I1114 15:47:40.858954  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 23/60
	I1114 15:47:41.860774  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 24/60
	I1114 15:47:42.863342  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 25/60
	I1114 15:47:43.865058  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 26/60
	I1114 15:47:44.866581  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 27/60
	I1114 15:47:45.868029  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 28/60
	I1114 15:47:46.869650  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 29/60
	I1114 15:47:47.871128  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 30/60
	I1114 15:47:48.872569  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 31/60
	I1114 15:47:49.874061  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 32/60
	I1114 15:47:50.875228  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 33/60
	I1114 15:47:51.876785  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 34/60
	I1114 15:47:52.879024  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 35/60
	I1114 15:47:53.880214  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 36/60
	I1114 15:47:54.881427  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 37/60
	I1114 15:47:55.882879  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 38/60
	I1114 15:47:56.884641  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 39/60
	I1114 15:47:57.886082  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 40/60
	I1114 15:47:58.887369  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 41/60
	I1114 15:47:59.888931  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 42/60
	I1114 15:48:00.890363  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 43/60
	I1114 15:48:01.891701  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 44/60
	I1114 15:48:02.893786  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 45/60
	I1114 15:48:03.895090  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 46/60
	I1114 15:48:04.896842  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 47/60
	I1114 15:48:05.898417  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 48/60
	I1114 15:48:06.899838  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 49/60
	I1114 15:48:07.901905  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 50/60
	I1114 15:48:08.903227  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 51/60
	I1114 15:48:09.904940  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 52/60
	I1114 15:48:10.906358  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 53/60
	I1114 15:48:11.907713  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 54/60
	I1114 15:48:12.909871  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 55/60
	I1114 15:48:13.911105  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 56/60
	I1114 15:48:14.912290  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 57/60
	I1114 15:48:15.913651  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 58/60
	I1114 15:48:16.915312  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 59/60
	I1114 15:48:17.915645  875655 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:48:17.915762  875655 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:48:17.915797  875655 retry.go:31] will retry after 940.343144ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:48:18.856908  875655 stop.go:39] StopHost: default-k8s-diff-port-529430
	I1114 15:48:18.857582  875655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:48:18.857660  875655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:48:18.873037  875655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I1114 15:48:18.873619  875655 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:48:18.874101  875655 main.go:141] libmachine: Using API Version  1
	I1114 15:48:18.874130  875655 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:48:18.874500  875655 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:48:18.876786  875655 out.go:177] * Stopping node "default-k8s-diff-port-529430"  ...
	I1114 15:48:18.878316  875655 main.go:141] libmachine: Stopping "default-k8s-diff-port-529430"...
	I1114 15:48:18.878338  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:48:18.880017  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Stop
	I1114 15:48:18.883219  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 0/60
	I1114 15:48:19.884506  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 1/60
	I1114 15:48:20.886140  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 2/60
	I1114 15:48:21.887899  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 3/60
	I1114 15:48:22.889452  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 4/60
	I1114 15:48:23.891731  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 5/60
	I1114 15:48:24.893318  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 6/60
	I1114 15:48:25.894826  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 7/60
	I1114 15:48:26.896371  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 8/60
	I1114 15:48:27.897934  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 9/60
	I1114 15:48:28.900091  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 10/60
	I1114 15:48:29.901306  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 11/60
	I1114 15:48:30.902966  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 12/60
	I1114 15:48:31.904583  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 13/60
	I1114 15:48:32.905762  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 14/60
	I1114 15:48:33.907766  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 15/60
	I1114 15:48:34.909196  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 16/60
	I1114 15:48:35.910754  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 17/60
	I1114 15:48:36.912063  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 18/60
	I1114 15:48:37.913911  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 19/60
	I1114 15:48:38.916157  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 20/60
	I1114 15:48:39.917656  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 21/60
	I1114 15:48:40.919287  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 22/60
	I1114 15:48:41.920997  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 23/60
	I1114 15:48:42.922495  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 24/60
	I1114 15:48:43.924396  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 25/60
	I1114 15:48:44.925799  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 26/60
	I1114 15:48:45.927276  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 27/60
	I1114 15:48:46.928854  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 28/60
	I1114 15:48:47.930302  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 29/60
	I1114 15:48:48.932326  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 30/60
	I1114 15:48:49.933710  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 31/60
	I1114 15:48:50.935127  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 32/60
	I1114 15:48:51.936642  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 33/60
	I1114 15:48:52.938426  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 34/60
	I1114 15:48:53.940285  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 35/60
	I1114 15:48:54.941827  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 36/60
	I1114 15:48:55.943061  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 37/60
	I1114 15:48:56.944724  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 38/60
	I1114 15:48:57.946306  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 39/60
	I1114 15:48:58.947994  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 40/60
	I1114 15:48:59.949524  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 41/60
	I1114 15:49:00.951009  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 42/60
	I1114 15:49:01.952668  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 43/60
	I1114 15:49:02.954024  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 44/60
	I1114 15:49:03.955832  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 45/60
	I1114 15:49:04.957575  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 46/60
	I1114 15:49:05.958913  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 47/60
	I1114 15:49:06.960571  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 48/60
	I1114 15:49:07.961953  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 49/60
	I1114 15:49:08.964008  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 50/60
	I1114 15:49:09.965572  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 51/60
	I1114 15:49:10.967353  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 52/60
	I1114 15:49:11.968764  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 53/60
	I1114 15:49:12.970249  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 54/60
	I1114 15:49:13.972387  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 55/60
	I1114 15:49:14.973725  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 56/60
	I1114 15:49:15.975173  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 57/60
	I1114 15:49:16.976480  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 58/60
	I1114 15:49:17.978067  875655 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for machine to stop 59/60
	I1114 15:49:18.979151  875655 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1114 15:49:18.979202  875655 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1114 15:49:18.981181  875655 out.go:177] 
	W1114 15:49:18.982842  875655 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1114 15:49:18.982861  875655 out.go:239] * 
	* 
	W1114 15:49:18.988539  875655 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 15:49:18.990840  875655 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-529430 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
E1114 15:49:20.220265  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:49:29.653695  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:49:34.571030  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430: exit status 3 (18.524861214s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:49:37.517073  876442 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	E1114 15:49:37.517100  876442 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-529430" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.79s)
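The default-k8s-diff-port log shows the same 60-poll loop wrapped in a single retry: the first pass fails, retry.go waits roughly 940ms, the second pass fails too, and the command exits with status 82. Below is a minimal sketch of that outer retry, with a hypothetical stopHost helper standing in for the real per-attempt logic; the backoff value and exit code are taken from the log, everything else is invented for illustration.

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// stopHost is a placeholder for the per-attempt stop logic (the 60-poll loop
	// shown in the log); here it always fails, matching this report's failure mode.
	func stopHost(profile string) error {
		return errors.New(`Temporary Error: stop: unable to stop vm, current state "Running"`)
	}

	// stopWithRetry mirrors the two-pass behavior in the log: one retry after a
	// sub-second backoff, then a hard exit with status 82.
	func stopWithRetry(profile string) {
		if err := stopHost(profile); err != nil {
			fmt.Printf("will retry after %v: %v\n", 940*time.Millisecond, err)
			time.Sleep(940 * time.Millisecond)
			if err := stopHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: %v\n", err)
				os.Exit(82)
			}
		}
	}

	func main() {
		stopWithRetry("default-k8s-diff-port-529430")
	}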

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998
E1114 15:48:28.032403  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998: exit status 3 (3.199862805s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:29.805057  875934 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	E1114 15:48:29.805074  875934 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-490998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1114 15:48:33.153600  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:48:35.714319  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-490998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15599085s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-490998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998: exit status 3 (3.060041242s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:39.021150  876005 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host
	E1114 15:48:39.021173  876005 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.251:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-490998" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
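For reproducing the post-stop check that fails here: the test shells out to out/minikube-linux-amd64 status --format={{.Host}} and expects the printed host state to be "Stopped", but because the VM is still half-running and SSH is unreachable the command prints "Error" and exits with status 3. A small sketch of driving that same check from Go, assuming the binary path used throughout this report; hostStatus and the profile name in main are illustrative, not part of the test suite.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus runs the same command the test runs and returns the printed
	// host state plus the process exit code. The binary path is the one used in
	// this report; adjust it for a local checkout.
	func hostStatus(profile string) (string, int, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				// In this report, exit status 3 with state "Error" means the node's
				// SSH port was unreachable ("no route to host"), not a clean stop.
				return state, ee.ExitCode(), nil
			}
			return state, -1, err
		}
		return state, 0, nil
	}

	func main() {
		state, code, err := hostStatus("no-preload-490998")
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		if state != "Stopped" {
			fmt.Printf("expected post-stop host status \"Stopped\" but got %q (exit %d)\n", state, code)
		}
	}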

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
E1114 15:48:39.004627  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880: exit status 3 (3.168297495s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:41.549146  876035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E1114 15:48:41.549172  876035 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-279880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1114 15:48:43.145010  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:48:43.394526  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-279880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15590708s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-279880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
E1114 15:48:48.691366  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:48.696708  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:48.707012  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:48.727316  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:48.767631  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:48.848052  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:49.008762  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:49.329438  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:49.970030  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880: exit status 3 (3.059353433s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:48:50.765181  876176 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E1114 15:48:50.765200  876176 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-279880" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105: exit status 3 (3.199736427s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:49:03.085209  876276 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	E1114 15:49:03.085231  876276 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-842105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1114 15:49:03.848889  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:49:03.875117  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:49:09.172440  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-842105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156638914s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-842105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105: exit status 3 (3.05923716s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:49:12.301238  876349 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	E1114 15:49:12.301265  876349 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-842105" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
E1114 15:49:39.652556  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:39.657857  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:39.668148  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:39.688532  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:39.728829  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:39.809215  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:39.969708  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:40.290635  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430: exit status 3 (3.199658798s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:49:40.717160  876538 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	E1114 15:49:40.717191  876538 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-529430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1114 15:49:40.930869  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:42.211722  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:44.772430  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:49:44.835690  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-529430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156966276s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-529430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
E1114 15:49:49.893257  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430: exit status 3 (3.058837883s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 15:49:49.933139  876627 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host
	E1114 15:49:49.933166  876627 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-529430" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
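All four EnableAddonAfterStop failures share one root cause: addons enable dashboard first checks whether the cluster is paused by SSHing to the node and running crictl, and with the VM unreachable the TCP dial to port 22 fails with "no route to host" (MK_ADDON_ENABLE_PAUSED, exit status 11). Below is a hedged sketch of that reachability precondition, assuming the node IP from this report and a 3-second timeout; canReachSSH is a name invented for the example, not a minikube function.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// canReachSSH probes the node's SSH port before attempting anything that
	// needs a session, mirroring the precondition that fails in the logs above.
	func canReachSSH(host string) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 3*time.Second)
		if err != nil {
			return fmt.Errorf("node not reachable over SSH: %w", err)
		}
		return conn.Close()
	}

	func main() {
		if err := canReachSSH("192.168.61.196"); err != nil {
			fmt.Println("skip addon enable:", err)
			return
		}
		fmt.Println("node reachable; safe to enable the dashboard addon")
	}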

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279880 -n embed-certs-279880
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:07:59.725839928 +0000 UTC m=+5347.256024897
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-279880 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-279880 logs -n 25: (1.650015353s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:49:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:49:49.997953  876668 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:49:49.998137  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998147  876668 out.go:309] Setting ErrFile to fd 2...
	I1114 15:49:49.998152  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998369  876668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:49:49.998978  876668 out.go:303] Setting JSON to false
	I1114 15:49:50.000072  876668 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":45142,"bootTime":1699931848,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:49:50.000141  876668 start.go:138] virtualization: kvm guest
	I1114 15:49:50.002690  876668 out.go:177] * [default-k8s-diff-port-529430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:49:50.004392  876668 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:49:50.004396  876668 notify.go:220] Checking for updates...
	I1114 15:49:50.006193  876668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:49:50.007844  876668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:49:50.009232  876668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:49:50.010572  876668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:49:50.011857  876668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:49:50.013604  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:49:50.014059  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.014149  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.028903  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I1114 15:49:50.029290  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.029869  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.029892  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.030244  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.030474  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.030753  876668 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:49:50.031049  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.031096  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.045696  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I1114 15:49:50.046117  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.046625  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.046658  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.047069  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.047303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.082731  876668 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:49:50.084362  876668 start.go:298] selected driver: kvm2
	I1114 15:49:50.084384  876668 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.084517  876668 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:49:50.085533  876668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.085625  876668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:49:50.100834  876668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:49:50.101226  876668 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:49:50.101308  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:49:50.101328  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:49:50.101342  876668 start_flags.go:323] config:
	{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-52943
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.101540  876668 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.103413  876668 out.go:177] * Starting control plane node default-k8s-diff-port-529430 in cluster default-k8s-diff-port-529430
	I1114 15:49:49.196989  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:52.269051  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:50.104763  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:49:50.104815  876668 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:49:50.104835  876668 cache.go:56] Caching tarball of preloaded images
	I1114 15:49:50.104932  876668 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:49:50.104946  876668 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:49:50.105089  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:49:50.105307  876668 start.go:365] acquiring machines lock for default-k8s-diff-port-529430: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:49:58.349061  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:01.421017  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:07.501030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:10.573058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:16.653093  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:19.725040  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:25.805047  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:28.877039  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:34.957084  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:38.029008  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:44.109068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:47.181018  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:53.261065  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:56.333048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:02.413048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:05.485063  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:11.565034  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:14.636996  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:20.717050  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:23.789097  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:29.869058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:32.941066  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:39.021029  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:42.093064  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:48.173074  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:51.245007  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:57.325014  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:00.397111  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:06.477052  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:09.549016  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:15.629105  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:18.701000  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:24.781004  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:27.853046  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:33.933030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:37.005067  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:43.085068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:46.157044  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:52.237056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:55.309080  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:01.389056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:04.461005  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:10.541083  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:13.613033  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:16.617368  876220 start.go:369] acquired machines lock for "embed-certs-279880" in 4m25.691009916s
	I1114 15:53:16.617492  876220 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:16.617500  876220 fix.go:54] fixHost starting: 
	I1114 15:53:16.617993  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:16.618029  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:16.633223  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1114 15:53:16.633787  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:16.634385  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:53:16.634412  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:16.634743  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:16.634958  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:16.635120  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:53:16.636933  876220 fix.go:102] recreateIfNeeded on embed-certs-279880: state=Stopped err=<nil>
	I1114 15:53:16.636967  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	W1114 15:53:16.637164  876220 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:16.638727  876220 out.go:177] * Restarting existing kvm2 VM for "embed-certs-279880" ...
	I1114 15:53:16.615062  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:16.615116  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:53:16.617147  876065 machine.go:91] provisioned docker machine in 4m37.399136623s
	I1114 15:53:16.617196  876065 fix.go:56] fixHost completed within 4m37.422027817s
	I1114 15:53:16.617203  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 4m37.422123699s
	W1114 15:53:16.617282  876065 start.go:691] error starting host: provision: host is not running
	W1114 15:53:16.617491  876065 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1114 15:53:16.617502  876065 start.go:706] Will try again in 5 seconds ...
	I1114 15:53:16.640137  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Start
	I1114 15:53:16.640330  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring networks are active...
	I1114 15:53:16.641029  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network default is active
	I1114 15:53:16.641386  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network mk-embed-certs-279880 is active
	I1114 15:53:16.641738  876220 main.go:141] libmachine: (embed-certs-279880) Getting domain xml...
	I1114 15:53:16.642488  876220 main.go:141] libmachine: (embed-certs-279880) Creating domain...
	I1114 15:53:17.858298  876220 main.go:141] libmachine: (embed-certs-279880) Waiting to get IP...
	I1114 15:53:17.859506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:17.859912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:17.860039  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:17.859881  877182 retry.go:31] will retry after 225.269159ms: waiting for machine to come up
	I1114 15:53:18.086611  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.087099  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.087135  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.087062  877182 retry.go:31] will retry after 322.840106ms: waiting for machine to come up
	I1114 15:53:18.411781  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.412238  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.412278  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.412127  877182 retry.go:31] will retry after 459.77881ms: waiting for machine to come up
	I1114 15:53:18.873994  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.874393  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.874433  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.874341  877182 retry.go:31] will retry after 460.123636ms: waiting for machine to come up
	I1114 15:53:19.335916  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.336488  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.336520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.336414  877182 retry.go:31] will retry after 526.141665ms: waiting for machine to come up
	I1114 15:53:19.864336  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.864830  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.864856  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.864766  877182 retry.go:31] will retry after 817.261813ms: waiting for machine to come up
	I1114 15:53:20.683806  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:20.684289  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:20.684309  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:20.684244  877182 retry.go:31] will retry after 1.026381849s: waiting for machine to come up
	I1114 15:53:21.619196  876065 start.go:365] acquiring machines lock for no-preload-490998: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:53:21.712760  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:21.713237  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:21.713263  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:21.713201  877182 retry.go:31] will retry after 1.088672482s: waiting for machine to come up
	I1114 15:53:22.803222  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:22.803698  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:22.803734  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:22.803639  877182 retry.go:31] will retry after 1.394534659s: waiting for machine to come up
	I1114 15:53:24.199372  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:24.199764  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:24.199794  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:24.199706  877182 retry.go:31] will retry after 1.511094366s: waiting for machine to come up
	I1114 15:53:25.713650  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:25.714062  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:25.714107  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:25.713980  877182 retry.go:31] will retry after 1.821074261s: waiting for machine to come up
	I1114 15:53:27.536875  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:27.537423  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:27.537458  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:27.537349  877182 retry.go:31] will retry after 2.856840662s: waiting for machine to come up
	I1114 15:53:30.395562  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:30.396059  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:30.396086  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:30.396007  877182 retry.go:31] will retry after 4.003431067s: waiting for machine to come up
	I1114 15:53:35.689894  876396 start.go:369] acquired machines lock for "old-k8s-version-842105" in 4m23.221865246s
	I1114 15:53:35.689964  876396 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:35.689973  876396 fix.go:54] fixHost starting: 
	I1114 15:53:35.690410  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:35.690446  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:35.709418  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I1114 15:53:35.709816  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:35.710366  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:53:35.710400  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:35.710760  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:35.710946  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:35.711101  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:53:35.712666  876396 fix.go:102] recreateIfNeeded on old-k8s-version-842105: state=Stopped err=<nil>
	I1114 15:53:35.712696  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	W1114 15:53:35.712882  876396 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:35.715357  876396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-842105" ...
	I1114 15:53:34.403163  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.403706  876220 main.go:141] libmachine: (embed-certs-279880) Found IP for machine: 192.168.39.147
	I1114 15:53:34.403737  876220 main.go:141] libmachine: (embed-certs-279880) Reserving static IP address...
	I1114 15:53:34.403757  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has current primary IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.404290  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.404318  876220 main.go:141] libmachine: (embed-certs-279880) DBG | skip adding static IP to network mk-embed-certs-279880 - found existing host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"}
	I1114 15:53:34.404331  876220 main.go:141] libmachine: (embed-certs-279880) Reserved static IP address: 192.168.39.147
	I1114 15:53:34.404343  876220 main.go:141] libmachine: (embed-certs-279880) Waiting for SSH to be available...
	I1114 15:53:34.404351  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Getting to WaitForSSH function...
	I1114 15:53:34.406833  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407219  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.407248  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407367  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH client type: external
	I1114 15:53:34.407412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa (-rw-------)
	I1114 15:53:34.407481  876220 main.go:141] libmachine: (embed-certs-279880) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:34.407525  876220 main.go:141] libmachine: (embed-certs-279880) DBG | About to run SSH command:
	I1114 15:53:34.407551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | exit 0
	I1114 15:53:34.504225  876220 main.go:141] libmachine: (embed-certs-279880) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:34.504696  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetConfigRaw
	I1114 15:53:34.505414  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.508202  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.508632  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.508685  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.509034  876220 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/config.json ...
	I1114 15:53:34.509282  876220 machine.go:88] provisioning docker machine ...
	I1114 15:53:34.509309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:34.509521  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509730  876220 buildroot.go:166] provisioning hostname "embed-certs-279880"
	I1114 15:53:34.509758  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509894  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.511987  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512285  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.512317  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512472  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.512629  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512751  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512925  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.513118  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.513578  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.513594  876220 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-279880 && echo "embed-certs-279880" | sudo tee /etc/hostname
	I1114 15:53:34.664546  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-279880
	
	I1114 15:53:34.664595  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.667537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.667908  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.667941  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.668142  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.668388  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668631  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668910  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.669142  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.669652  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.669684  876220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-279880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-279880/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-279880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:34.810710  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:34.810745  876220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:34.810768  876220 buildroot.go:174] setting up certificates
	I1114 15:53:34.810780  876220 provision.go:83] configureAuth start
	I1114 15:53:34.810788  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.811138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.814056  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.814537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814747  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.817131  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817513  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.817544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817675  876220 provision.go:138] copyHostCerts
	I1114 15:53:34.817774  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:34.817789  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:34.817869  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:34.817990  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:34.818006  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:34.818039  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:34.818117  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:34.818129  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:34.818161  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:34.818226  876220 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.embed-certs-279880 san=[192.168.39.147 192.168.39.147 localhost 127.0.0.1 minikube embed-certs-279880]
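Editor's note: the server certificate above is issued by minikube's own CA inside provision.go, not by shelling out. As a rough, hedged illustration only, an equivalent openssl flow with the same SANs and assumed local file names would look like this (bash, process substitution assumed):

    # sketch only: sign a server cert with the minikube CA for the SANs listed above
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-279880"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.39.147,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-279880')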
	I1114 15:53:34.925955  876220 provision.go:172] copyRemoteCerts
	I1114 15:53:34.926014  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:34.926039  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.928954  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929322  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.929346  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929520  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.929703  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.929866  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.930033  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.026199  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:35.049682  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1114 15:53:35.072415  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:53:35.097200  876220 provision.go:86] duration metric: configureAuth took 286.405404ms
	I1114 15:53:35.097226  876220 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:35.097425  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:53:35.097558  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.100561  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.100912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.100965  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.101091  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.101296  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101500  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101641  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.101795  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.102165  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.102198  876220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:35.411682  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:35.411719  876220 machine.go:91] provisioned docker machine in 902.419916ms
	I1114 15:53:35.411733  876220 start.go:300] post-start starting for "embed-certs-279880" (driver="kvm2")
	I1114 15:53:35.411748  876220 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:35.411770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.412161  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:35.412201  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.415071  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.415551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.415849  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.416000  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.416143  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.512565  876220 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:35.517087  876220 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:35.517146  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:35.517235  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:35.517356  876220 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:35.517511  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:35.527797  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:35.552798  876220 start.go:303] post-start completed in 141.045785ms
	I1114 15:53:35.552827  876220 fix.go:56] fixHost completed within 18.935326604s
	I1114 15:53:35.552855  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.555540  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.555930  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.555970  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.556155  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.556390  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556573  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.557007  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.557338  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.557348  876220 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:53:35.689729  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977215.639237319
	
	I1114 15:53:35.689759  876220 fix.go:206] guest clock: 1699977215.639237319
	I1114 15:53:35.689769  876220 fix.go:219] Guest: 2023-11-14 15:53:35.639237319 +0000 UTC Remote: 2023-11-14 15:53:35.552830911 +0000 UTC m=+284.779127994 (delta=86.406408ms)
	I1114 15:53:35.689801  876220 fix.go:190] guest clock delta is within tolerance: 86.406408ms
	I1114 15:53:35.689807  876220 start.go:83] releasing machines lock for "embed-certs-279880", held for 19.072338997s
	I1114 15:53:35.689842  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.690197  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:35.692786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693260  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.693311  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693440  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694011  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694222  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694338  876220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:35.694404  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.694455  876220 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:35.694484  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.697198  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697220  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697702  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697732  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697771  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697865  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698085  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698088  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698297  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698303  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698438  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698562  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.698974  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.813318  876220 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:35.819124  876220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:35.957038  876220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:35.964876  876220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:35.964984  876220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:35.980758  876220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:35.980780  876220 start.go:472] detecting cgroup driver to use...
	I1114 15:53:35.980848  876220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:35.993968  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:36.006564  876220 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:36.006626  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:36.021314  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:36.035842  876220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:36.147617  876220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:36.268025  876220 docker.go:219] disabling docker service ...
	I1114 15:53:36.268113  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:36.280847  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:36.292659  876220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:36.414923  876220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:36.534481  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:36.547652  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:36.565229  876220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:53:36.565312  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.574949  876220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:36.575030  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.585105  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.594790  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
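Editor's note: after the three sed edits above, the drop-in should contain roughly the keys shown in the comments below. This is an assumption about the resulting file, not output captured from the host:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"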
	I1114 15:53:36.603613  876220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:36.613116  876220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:36.620828  876220 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:36.620884  876220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:36.632600  876220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:36.642150  876220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:36.756773  876220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:36.929381  876220 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:36.929467  876220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:36.934735  876220 start.go:540] Will wait 60s for crictl version
	I1114 15:53:36.934790  876220 ssh_runner.go:195] Run: which crictl
	I1114 15:53:36.940182  876220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:36.991630  876220 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:36.991718  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.045160  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.097281  876220 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:53:35.716835  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Start
	I1114 15:53:35.716987  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring networks are active...
	I1114 15:53:35.717715  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network default is active
	I1114 15:53:35.718030  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network mk-old-k8s-version-842105 is active
	I1114 15:53:35.718429  876396 main.go:141] libmachine: (old-k8s-version-842105) Getting domain xml...
	I1114 15:53:35.719055  876396 main.go:141] libmachine: (old-k8s-version-842105) Creating domain...
	I1114 15:53:36.991860  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting to get IP...
	I1114 15:53:36.992911  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:36.993376  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:36.993427  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:36.993318  877295 retry.go:31] will retry after 227.553321ms: waiting for machine to come up
	I1114 15:53:37.223023  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.223561  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.223629  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.223511  877295 retry.go:31] will retry after 308.951372ms: waiting for machine to come up
	I1114 15:53:37.098693  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:37.102205  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102676  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:37.102710  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102955  876220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:37.107113  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:37.120009  876220 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:53:37.120075  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:37.160178  876220 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:53:37.160241  876220 ssh_runner.go:195] Run: which lz4
	I1114 15:53:37.164351  876220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:53:37.168645  876220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:53:37.168684  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:53:39.026796  876220 crio.go:444] Took 1.862508 seconds to copy over tarball
	I1114 15:53:39.026876  876220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:53:37.534243  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.534797  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.534827  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.534774  877295 retry.go:31] will retry after 440.76682ms: waiting for machine to come up
	I1114 15:53:37.977712  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.978257  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.978287  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.978207  877295 retry.go:31] will retry after 402.601155ms: waiting for machine to come up
	I1114 15:53:38.383001  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.383515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.383551  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.383468  877295 retry.go:31] will retry after 580.977501ms: waiting for machine to come up
	I1114 15:53:38.966457  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.967088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.967121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.967026  877295 retry.go:31] will retry after 679.65563ms: waiting for machine to come up
	I1114 15:53:39.648086  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:39.648560  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:39.648593  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:39.648501  877295 retry.go:31] will retry after 1.014858956s: waiting for machine to come up
	I1114 15:53:40.664728  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:40.665285  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:40.665321  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:40.665230  877295 retry.go:31] will retry after 1.035036164s: waiting for machine to come up
	I1114 15:53:41.701639  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:41.702088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:41.702123  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:41.702029  877295 retry.go:31] will retry after 1.15711647s: waiting for machine to come up
	I1114 15:53:41.885259  876220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.858355323s)
	I1114 15:53:41.885288  876220 crio.go:451] Took 2.858463 seconds to extract the tarball
	I1114 15:53:41.885300  876220 ssh_runner.go:146] rm: /preloaded.tar.lz4
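Editor's note: a manual equivalent of the preload copy and extraction above, as a sketch only. It assumes lz4 is installed on the guest, uses the SSH identity from the log, and stages the tarball under /tmp rather than the filesystem root used by minikube's internal scp:

    KEY=/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa
    TARBALL=/home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
    scp -i "$KEY" "$TARBALL" docker@192.168.39.147:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.147 'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo crictl images'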
	I1114 15:53:41.926498  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:41.972943  876220 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:53:41.972980  876220 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:53:41.973053  876220 ssh_runner.go:195] Run: crio config
	I1114 15:53:42.038006  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:53:42.038032  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:53:42.038053  876220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:53:42.038071  876220 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-279880 NodeName:embed-certs-279880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:53:42.038234  876220 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-279880"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:53:42.038323  876220 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-279880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
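Editor's note: two quick checks of what was just rendered, shown as a sketch. The diff command is the same comparison the restart path runs a few lines below (once kubeadm.yaml.new has been uploaded), and systemctl cat shows the base kubelet unit together with the 10-kubeadm.conf drop-in that is copied next:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo systemctl cat kubelet   # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf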
	I1114 15:53:42.038394  876220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:53:42.050379  876220 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:53:42.050462  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:53:42.058921  876220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1114 15:53:42.074304  876220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:53:42.090403  876220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1114 15:53:42.106412  876220 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I1114 15:53:42.109907  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:42.122915  876220 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880 for IP: 192.168.39.147
	I1114 15:53:42.122945  876220 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:53:42.123106  876220 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:53:42.123148  876220 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:53:42.123226  876220 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/client.key
	I1114 15:53:42.123290  876220 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key.a88b087d
	I1114 15:53:42.123322  876220 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key
	I1114 15:53:42.123430  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:53:42.123456  876220 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:53:42.123467  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:53:42.123486  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:53:42.123517  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:53:42.123541  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:53:42.123584  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:42.124261  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:53:42.149787  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:53:42.177563  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:53:42.203326  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:53:42.228832  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:53:42.254674  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:53:42.280548  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:53:42.305318  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:53:42.331461  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:53:42.356555  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:53:42.382688  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:53:42.407945  876220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:53:42.424902  876220 ssh_runner.go:195] Run: openssl version
	I1114 15:53:42.430411  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:53:42.443033  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448429  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448496  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.455631  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:53:42.466421  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:53:42.476013  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480381  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480434  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.486048  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:53:42.495375  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:53:42.505366  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509762  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509804  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.515519  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
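Editor's note: the symlink names used above are the OpenSSL subject hashes of the certificates. As an illustration, assumed to be run on the guest:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem)
    ls -l "/etc/ssl/certs/$h.0"   # expected to point at 8322112.pem (3ec20f2e.0 in this run)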
	I1114 15:53:42.524838  876220 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:53:42.528912  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:53:42.534641  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:53:42.540138  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:53:42.545849  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:53:42.551518  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:53:42.559001  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
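Editor's note: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so each Run above is a cheap "will this cert still be valid in 24 hours" probe. For example (sketch only):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"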
	I1114 15:53:42.566135  876220 kubeadm.go:404] StartCluster: {Name:embed-certs-279880 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:53:42.566241  876220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:53:42.566297  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:42.613075  876220 cri.go:89] found id: ""
	I1114 15:53:42.613158  876220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:53:42.622675  876220 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:53:42.622696  876220 kubeadm.go:636] restartCluster start
	I1114 15:53:42.622785  876220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:53:42.631529  876220 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.633202  876220 kubeconfig.go:92] found "embed-certs-279880" server: "https://192.168.39.147:8443"
	I1114 15:53:42.636588  876220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:53:42.645531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.645578  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.656733  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.656764  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.656807  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.667524  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.168290  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.168372  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.181051  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.668650  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.668772  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.681727  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.168359  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.168462  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.182012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.668666  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.668763  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.680872  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.168505  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.168625  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.180321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.667875  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.668016  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.680318  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.861352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:42.861900  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:42.861963  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:42.861836  877295 retry.go:31] will retry after 2.117184279s: waiting for machine to come up
	I1114 15:53:44.982059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:44.982506  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:44.982538  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:44.982449  877295 retry.go:31] will retry after 2.3999215s: waiting for machine to come up
	I1114 15:53:46.168271  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.168410  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.180809  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:46.667886  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.668009  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.679468  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.168072  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.168204  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.180268  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.667786  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.667948  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.678927  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.168531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.168660  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.180004  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.668597  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.668752  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.680945  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.168543  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.168635  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.180012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.668382  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.668486  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.683691  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.168265  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.168353  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.179169  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.667618  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.667730  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.678707  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.384177  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:47.384695  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:47.384734  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:47.384649  877295 retry.go:31] will retry after 2.820309413s: waiting for machine to come up
	I1114 15:53:50.208736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:50.209188  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:50.209221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:50.209130  877295 retry.go:31] will retry after 2.822648093s: waiting for machine to come up
	I1114 15:53:51.168046  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.168144  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.179168  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:51.668301  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.668407  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.680321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.167971  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:52.168062  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:52.179159  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.645656  876220 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
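Editor's note: the repeated "Checking apiserver status" lines above are a readiness poll that gives up once its context deadline expires, here roughly every 500ms for about 10s. A bash sketch of the same idea, with the interval and timeout assumed from the timestamps rather than taken from minikube's Go code:

    deadline=$((SECONDS + 10))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then echo "context deadline exceeded"; break; fi
      sleep 0.5
    done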
	I1114 15:53:52.645688  876220 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:53:52.645702  876220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:53:52.645806  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:52.682368  876220 cri.go:89] found id: ""
	I1114 15:53:52.682482  876220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:53:52.697186  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:53:52.705449  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:53:52.705503  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714019  876220 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714054  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:52.831334  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.796131  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.984427  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.060195  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.137132  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:53:54.137217  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.155040  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.676264  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.176129  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.676614  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:53.034614  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:53.035044  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:53.035078  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:53.034993  877295 retry.go:31] will retry after 4.160398149s: waiting for machine to come up
	I1114 15:53:57.196776  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197211  876396 main.go:141] libmachine: (old-k8s-version-842105) Found IP for machine: 192.168.72.151
	I1114 15:53:57.197240  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserving static IP address...
	I1114 15:53:57.197260  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has current primary IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197667  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.197700  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserved static IP address: 192.168.72.151
	I1114 15:53:57.197724  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | skip adding static IP to network mk-old-k8s-version-842105 - found existing host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"}
	I1114 15:53:57.197742  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting for SSH to be available...
	I1114 15:53:57.197754  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Getting to WaitForSSH function...
	I1114 15:53:57.200279  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200646  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.200681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200916  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH client type: external
	I1114 15:53:57.200948  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa (-rw-------)
	I1114 15:53:57.200983  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:57.200999  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | About to run SSH command:
	I1114 15:53:57.201015  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | exit 0
	I1114 15:53:57.288554  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:57.288904  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetConfigRaw
	I1114 15:53:57.289691  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.292087  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292445  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.292501  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292720  876396 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/config.json ...
	I1114 15:53:57.292930  876396 machine.go:88] provisioning docker machine ...
	I1114 15:53:57.292950  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:57.293164  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293326  876396 buildroot.go:166] provisioning hostname "old-k8s-version-842105"
	I1114 15:53:57.293352  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293472  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.295765  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296139  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.296170  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296299  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.296470  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296625  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296768  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.296945  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.297524  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.297546  876396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-842105 && echo "old-k8s-version-842105" | sudo tee /etc/hostname
	I1114 15:53:58.537304  876668 start.go:369] acquired machines lock for "default-k8s-diff-port-529430" in 4m8.43196122s
	I1114 15:53:58.537380  876668 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:58.537392  876668 fix.go:54] fixHost starting: 
	I1114 15:53:58.537828  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:58.537865  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:58.555361  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I1114 15:53:58.555809  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:58.556346  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:53:58.556379  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:58.556762  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:58.556993  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:53:58.557144  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:53:58.558707  876668 fix.go:102] recreateIfNeeded on default-k8s-diff-port-529430: state=Stopped err=<nil>
	I1114 15:53:58.558736  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	W1114 15:53:58.558888  876668 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:58.561175  876668 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-529430" ...
	I1114 15:53:57.423888  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842105
	
	I1114 15:53:57.423971  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.427115  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427421  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.427459  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427660  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.427882  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428135  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428351  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.428584  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.429089  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.429124  876396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-842105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-842105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-842105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:57.554847  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:57.554893  876396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:57.554957  876396 buildroot.go:174] setting up certificates
	I1114 15:53:57.554974  876396 provision.go:83] configureAuth start
	I1114 15:53:57.554989  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.555342  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.558305  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.558711  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558876  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.561568  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.561937  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.561973  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.562106  876396 provision.go:138] copyHostCerts
	I1114 15:53:57.562196  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:57.562218  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:57.562284  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:57.562402  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:57.562413  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:57.562445  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:57.562520  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:57.562532  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:57.562561  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:57.562631  876396 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-842105 san=[192.168.72.151 192.168.72.151 localhost 127.0.0.1 minikube old-k8s-version-842105]
	I1114 15:53:57.825621  876396 provision.go:172] copyRemoteCerts
	I1114 15:53:57.825706  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:57.825739  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.828352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828732  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.828778  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828924  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.829159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.829356  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.829505  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:57.913614  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:57.935960  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:53:57.957927  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:53:57.980061  876396 provision.go:86] duration metric: configureAuth took 425.071777ms
	I1114 15:53:57.980109  876396 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:57.980308  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:53:57.980405  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.983736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984128  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.984161  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984367  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.984574  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984927  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.985116  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.985478  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.985505  876396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:58.297063  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:58.297107  876396 machine.go:91] provisioned docker machine in 1.004160647s
	I1114 15:53:58.297121  876396 start.go:300] post-start starting for "old-k8s-version-842105" (driver="kvm2")
	I1114 15:53:58.297135  876396 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:58.297159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.297578  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:58.297624  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.300608  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301051  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.301081  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301312  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.301485  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.301655  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.301774  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.387785  876396 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:58.391947  876396 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:58.391974  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:58.392056  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:58.392177  876396 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:58.392301  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:58.401525  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:58.422853  876396 start.go:303] post-start completed in 125.713467ms
	I1114 15:53:58.422892  876396 fix.go:56] fixHost completed within 22.732917848s
	I1114 15:53:58.422922  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.425682  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.426098  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426282  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.426487  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426663  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426830  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.427040  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:58.427400  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:58.427416  876396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:58.537121  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977238.485050071
	
	I1114 15:53:58.537151  876396 fix.go:206] guest clock: 1699977238.485050071
	I1114 15:53:58.537161  876396 fix.go:219] Guest: 2023-11-14 15:53:58.485050071 +0000 UTC Remote: 2023-11-14 15:53:58.422897103 +0000 UTC m=+286.112017318 (delta=62.152968ms)
	I1114 15:53:58.537187  876396 fix.go:190] guest clock delta is within tolerance: 62.152968ms
	I1114 15:53:58.537206  876396 start.go:83] releasing machines lock for "old-k8s-version-842105", held for 22.847251095s
	I1114 15:53:58.537248  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.537548  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:58.540515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.540932  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.540974  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.541110  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541612  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541912  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.542012  876396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:58.542077  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.542171  876396 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:58.542202  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.544841  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545190  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545246  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545465  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.545666  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.545694  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545714  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545816  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.545905  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.546006  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.546067  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.546212  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.546365  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.626301  876396 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:58.651834  876396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:58.799770  876396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:58.806042  876396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:58.806134  876396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:58.824707  876396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:58.824752  876396 start.go:472] detecting cgroup driver to use...
	I1114 15:53:58.824824  876396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:58.840144  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:58.854846  876396 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:58.854905  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:58.869926  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:58.883517  876396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:58.990519  876396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:59.108637  876396 docker.go:219] disabling docker service ...
	I1114 15:53:59.108712  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:59.124681  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:59.138748  876396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:59.260422  876396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:59.364365  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:59.376773  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:59.394948  876396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:53:59.395027  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.404000  876396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:59.404074  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.412822  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.421316  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.429685  876396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:59.438818  876396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:59.446459  876396 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:59.446533  876396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:59.459160  876396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:59.467670  876396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:59.579125  876396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:59.794436  876396 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:59.794525  876396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:59.801013  876396 start.go:540] Will wait 60s for crictl version
	I1114 15:53:59.801095  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:53:59.805735  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:59.851270  876396 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:59.851383  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.898885  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.953911  876396 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1114 15:53:58.562788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Start
	I1114 15:53:58.562971  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring networks are active...
	I1114 15:53:58.563570  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network default is active
	I1114 15:53:58.564001  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network mk-default-k8s-diff-port-529430 is active
	I1114 15:53:58.564406  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Getting domain xml...
	I1114 15:53:58.565186  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Creating domain...
	I1114 15:53:59.907130  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting to get IP...
	I1114 15:53:59.908507  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.908991  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.909128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:53:59.908977  877437 retry.go:31] will retry after 306.122553ms: waiting for machine to come up
	I1114 15:53:56.176595  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.676568  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.699015  876220 api_server.go:72] duration metric: took 2.561885213s to wait for apiserver process to appear ...
	I1114 15:53:56.699041  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:53:56.699058  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:53:59.955466  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:59.959121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959545  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:59.959572  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959896  876396 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:59.965859  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:59.982494  876396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 15:53:59.982563  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:00.029401  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:00.029483  876396 ssh_runner.go:195] Run: which lz4
	I1114 15:54:00.034065  876396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:54:00.039738  876396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:00.039780  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1114 15:54:01.846049  876396 crio.go:444] Took 1.812024 seconds to copy over tarball
	I1114 15:54:01.846160  876396 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:01.387625  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.387668  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.387690  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.430505  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.430539  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.930801  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.937138  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:01.937169  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.431712  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.442719  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:02.442758  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.931021  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.938062  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:54:02.947420  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:02.947453  876220 api_server.go:131] duration metric: took 6.248404315s to wait for apiserver health ...
	I1114 15:54:02.947465  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:54:02.947479  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:02.949231  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:00.216693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217419  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217476  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.217346  877437 retry.go:31] will retry after 276.469735ms: waiting for machine to come up
	I1114 15:54:00.496200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496596  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496632  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.496550  877437 retry.go:31] will retry after 390.20616ms: waiting for machine to come up
	I1114 15:54:00.888367  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889341  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.889235  877437 retry.go:31] will retry after 551.896336ms: waiting for machine to come up
	I1114 15:54:01.443159  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443794  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443825  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:01.443756  877437 retry.go:31] will retry after 655.228992ms: waiting for machine to come up
	I1114 15:54:02.100194  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100681  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100716  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.100609  877437 retry.go:31] will retry after 896.817469ms: waiting for machine to come up
	I1114 15:54:02.999296  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999947  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999979  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.999897  877437 retry.go:31] will retry after 1.177419274s: waiting for machine to come up
	I1114 15:54:04.178783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179425  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179452  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:04.179351  877437 retry.go:31] will retry after 1.259348434s: waiting for machine to come up
	I1114 15:54:02.950643  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:02.986775  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:03.054339  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:03.074346  876220 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:03.074405  876220 system_pods.go:61] "coredns-5dd5756b68-gqxld" [0b846e58-0bbc-4770-94a4-8324753b36c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:03.074428  876220 system_pods.go:61] "etcd-embed-certs-279880" [e085e7a7-ec2e-4cf6-bbb2-d242a5e8d075] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:03.074442  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [4ffbfbaf-9978-4bb1-9e4e-ef23365f78fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:03.074455  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [d895906c-899f-41b3-9484-1a6985b978f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:03.074471  876220 system_pods.go:61] "kube-proxy-j2qnm" [feee8604-a749-4908-8361-42f63d55ec64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:03.074485  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [4325a0ba-9013-4899-b01b-befcb4cd5b72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:03.074504  876220 system_pods.go:61] "metrics-server-57f55c9bc5-gvtbw" [a7c44219-4b00-49c0-817f-68f9499f1ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:03.074531  876220 system_pods.go:61] "storage-provisioner" [f464123e-8329-4785-87ae-78ff30ac7d27] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:03.074547  876220 system_pods.go:74] duration metric: took 20.179327ms to wait for pod list to return data ...
	I1114 15:54:03.074558  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:03.078482  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:03.078526  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:03.078542  876220 node_conditions.go:105] duration metric: took 3.972732ms to run NodePressure ...
	I1114 15:54:03.078565  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:03.514232  876220 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521097  876220 kubeadm.go:787] kubelet initialised
	I1114 15:54:03.521125  876220 kubeadm.go:788] duration metric: took 6.859971ms waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521168  876220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:03.528777  876220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:05.249338  876396 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403140591s)
	I1114 15:54:05.249383  876396 crio.go:451] Took 3.403300 seconds to extract the tarball
	I1114 15:54:05.249397  876396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:05.298779  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:05.351838  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:05.351873  876396 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:05.352034  876396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.352124  876396 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.352201  876396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.352219  876396 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.352067  876396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.352087  876396 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354089  876396 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1114 15:54:05.354101  876396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.354115  876396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.354117  876396 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354097  876396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.354178  876396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.354197  876396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.354270  876396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.512829  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.521658  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.529228  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1114 15:54:05.529451  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.529597  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.529802  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.534672  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.613591  876396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1114 15:54:05.613650  876396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.613721  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.644613  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.668090  876396 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1114 15:54:05.668167  876396 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.668231  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.685343  876396 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1114 15:54:05.685398  876396 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1114 15:54:05.685458  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725459  876396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1114 15:54:05.725508  876396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.725523  876396 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1114 15:54:05.725561  876396 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.725565  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725602  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727180  876396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1114 15:54:05.727215  876396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.727249  876396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1114 15:54:05.727283  876396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.727254  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727322  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.727325  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.849608  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.849657  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1114 15:54:05.849694  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.849747  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.849753  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.849830  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1114 15:54:05.849847  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.990379  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1114 15:54:05.990536  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1114 15:54:06.006943  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1114 15:54:06.006966  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1114 15:54:06.007017  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1114 15:54:06.007076  876396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.007134  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1114 15:54:06.013121  876396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1114 15:54:06.013141  876396 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.013192  876396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1114 15:54:05.440685  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441307  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441342  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:05.441243  877437 retry.go:31] will retry after 1.84307404s: waiting for machine to come up
	I1114 15:54:07.286027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286581  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286612  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:07.286501  877437 retry.go:31] will retry after 2.149522769s: waiting for machine to come up
	I1114 15:54:09.437500  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.437998  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.438027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:09.437930  877437 retry.go:31] will retry after 1.825733531s: waiting for machine to come up
	I1114 15:54:06.558998  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.056443  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.550292  876220 pod_ready.go:92] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:09.550325  876220 pod_ready.go:81] duration metric: took 6.02152032s waiting for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:09.550338  876220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:07.587512  876396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.574275406s)
	I1114 15:54:07.587549  876396 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1114 15:54:07.587609  876396 cache_images.go:92] LoadImages completed in 2.235719587s
	W1114 15:54:07.587745  876396 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1114 15:54:07.587935  876396 ssh_runner.go:195] Run: crio config
	I1114 15:54:07.677561  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:07.677590  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:07.677624  876396 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:07.677649  876396 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-842105 NodeName:old-k8s-version-842105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 15:54:07.677852  876396 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-842105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-842105
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.151:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:07.677991  876396 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-842105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:54:07.678072  876396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1114 15:54:07.690041  876396 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:07.690195  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:07.699428  876396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1114 15:54:07.717871  876396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:07.736451  876396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1114 15:54:07.760405  876396 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:07.766002  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:07.782987  876396 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105 for IP: 192.168.72.151
	I1114 15:54:07.783024  876396 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:07.783232  876396 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:07.783328  876396 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:07.783435  876396 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/client.key
	I1114 15:54:07.783530  876396 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key.8e16fdf2
	I1114 15:54:07.783587  876396 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key
	I1114 15:54:07.783733  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:07.783774  876396 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:07.783788  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:07.783825  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:07.783860  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:07.783903  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:07.783976  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:07.784951  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:07.817959  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:07.849497  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:07.882885  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:54:07.917706  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:07.951168  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:07.980449  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:08.004910  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:08.038634  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:08.068999  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:08.099934  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:08.131714  876396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:08.150662  876396 ssh_runner.go:195] Run: openssl version
	I1114 15:54:08.158258  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:08.168218  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173533  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173650  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.179886  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:08.189654  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:08.199563  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204439  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204512  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.210587  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:08.220509  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:08.233859  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240418  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240484  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.248025  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:08.261693  876396 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:08.267518  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:08.275553  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:08.283812  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:08.292063  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:08.299976  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:08.307726  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:54:08.315248  876396 kubeadm.go:404] StartCluster: {Name:old-k8s-version-842105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:08.315441  876396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:08.315509  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:08.373222  876396 cri.go:89] found id: ""
	I1114 15:54:08.373309  876396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:08.386081  876396 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:08.386113  876396 kubeadm.go:636] restartCluster start
	I1114 15:54:08.386175  876396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:08.398113  876396 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.399779  876396 kubeconfig.go:92] found "old-k8s-version-842105" server: "https://192.168.72.151:8443"
	I1114 15:54:08.403355  876396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:08.415044  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.415107  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.431221  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.431246  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.431301  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.441629  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.941906  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.942002  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.953895  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.442080  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.442167  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.454396  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.941960  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.942060  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.957741  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.442467  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.442585  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.459029  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.942110  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.942218  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.958207  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.441724  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.441846  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.456551  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.942092  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.942207  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.954734  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.265162  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265717  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:11.265645  877437 retry.go:31] will retry after 3.454522942s: waiting for machine to come up
	I1114 15:54:14.722448  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722869  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722900  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:14.722811  877437 retry.go:31] will retry after 4.385736497s: waiting for machine to come up
	I1114 15:54:11.568989  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:11.569021  876220 pod_ready.go:81] duration metric: took 2.018672405s waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:11.569032  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:13.599380  876220 pod_ready.go:102] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:15.095781  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.095806  876220 pod_ready.go:81] duration metric: took 3.52676767s waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.095816  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101837  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.101860  876220 pod_ready.go:81] duration metric: took 6.035008ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101871  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107099  876220 pod_ready.go:92] pod "kube-proxy-j2qnm" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.107119  876220 pod_ready.go:81] duration metric: took 5.239707ms waiting for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107131  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146726  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.146753  876220 pod_ready.go:81] duration metric: took 39.614218ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146765  876220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:12.442685  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.442780  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.456555  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:12.941805  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.941902  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.955572  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.442111  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.442220  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.455769  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.941932  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.942051  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.957167  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.442727  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.442855  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.455220  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.941815  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.941911  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.955030  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.441942  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.442064  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.454228  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.942207  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.942299  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.955845  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.442537  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.442642  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.454339  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.941837  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.941933  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.955292  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:19.110067  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.110621  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Found IP for machine: 192.168.61.196
	I1114 15:54:19.110650  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserving static IP address...
	I1114 15:54:19.110682  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has current primary IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.111082  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.111142  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | skip adding static IP to network mk-default-k8s-diff-port-529430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"}
	I1114 15:54:19.111163  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserved static IP address: 192.168.61.196
	I1114 15:54:19.111178  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for SSH to be available...
	I1114 15:54:19.111191  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Getting to WaitForSSH function...
	I1114 15:54:19.113739  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114145  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.114196  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114327  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH client type: external
	I1114 15:54:19.114358  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa (-rw-------)
	I1114 15:54:19.114395  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:19.114417  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | About to run SSH command:
	I1114 15:54:19.114432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | exit 0
	I1114 15:54:19.213651  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:19.214087  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetConfigRaw
	I1114 15:54:19.214767  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.217678  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218072  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.218099  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218414  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:54:19.218634  876668 machine.go:88] provisioning docker machine ...
	I1114 15:54:19.218662  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:19.218923  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219132  876668 buildroot.go:166] provisioning hostname "default-k8s-diff-port-529430"
	I1114 15:54:19.219155  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219292  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.221719  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222106  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.222129  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222272  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.222435  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222606  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222748  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.222907  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.223312  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.223328  876668 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-529430 && echo "default-k8s-diff-port-529430" | sudo tee /etc/hostname
	I1114 15:54:19.373658  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-529430
	
	I1114 15:54:19.373691  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.376972  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377388  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.377432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377549  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.377754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.377934  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.378123  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.378325  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.378667  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.378685  876668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-529430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-529430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-529430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:19.523410  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:19.523453  876668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:19.523498  876668 buildroot.go:174] setting up certificates
	I1114 15:54:19.523511  876668 provision.go:83] configureAuth start
	I1114 15:54:19.523530  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.523872  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.526757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527213  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.527242  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.530193  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530590  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.530630  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530794  876668 provision.go:138] copyHostCerts
	I1114 15:54:19.530862  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:19.530886  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:19.530965  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:19.531069  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:19.531078  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:19.531104  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:19.531179  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:19.531188  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:19.531218  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:19.531285  876668 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-529430 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube default-k8s-diff-port-529430]
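
The provision step above generates a server certificate whose SANs cover the VM IP, localhost, minikube and the profile name, signed with the minikube CA (ca.pem/ca-key.pem). A minimal self-signed sketch of such a certificate in Go; self-signing instead of CA-signing keeps the example short, and the names and IPs are copied from the log line above, so this is illustrative rather than minikube's actual provisioning code:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key; minikube reuses its own key material instead.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-529430"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		// SANs mirroring the san=[...] list in the log above.
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-529430"},
		IPAddresses: []net.IP{net.ParseIP("192.168.61.196"), net.ParseIP("127.0.0.1")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template used as its own parent) purely for brevity.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "server cert generated (self-signed sketch)")
}
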
	I1114 15:54:19.845785  876668 provision.go:172] copyRemoteCerts
	I1114 15:54:19.845852  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:19.845880  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.849070  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849461  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.849492  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.849916  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.850139  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.850326  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:19.946041  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:19.976301  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1114 15:54:20.667697  876065 start.go:369] acquired machines lock for "no-preload-490998" in 59.048435079s
	I1114 15:54:20.667765  876065 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:54:20.667776  876065 fix.go:54] fixHost starting: 
	I1114 15:54:20.668233  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:20.668278  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:20.689041  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I1114 15:54:20.689574  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:20.690138  876065 main.go:141] libmachine: Using API Version  1
	I1114 15:54:20.690168  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:20.690554  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:20.690760  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:20.690909  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:54:20.692627  876065 fix.go:102] recreateIfNeeded on no-preload-490998: state=Stopped err=<nil>
	I1114 15:54:20.692652  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	W1114 15:54:20.692849  876065 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:54:20.694674  876065 out.go:177] * Restarting existing kvm2 VM for "no-preload-490998" ...
	I1114 15:54:17.454958  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:19.455250  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:20.001972  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:54:20.026531  876668 provision.go:86] duration metric: configureAuth took 502.998106ms
	I1114 15:54:20.026585  876668 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:20.026832  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:20.026965  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.030385  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.030791  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030974  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.031200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031423  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.031861  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.032341  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.032367  876668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:20.394771  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:20.394805  876668 machine.go:91] provisioned docker machine in 1.176155811s
	I1114 15:54:20.394818  876668 start.go:300] post-start starting for "default-k8s-diff-port-529430" (driver="kvm2")
	I1114 15:54:20.394832  876668 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:20.394853  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.395240  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:20.395288  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.398478  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.398906  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.398945  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.399107  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.399344  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.399584  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.399752  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.491251  876668 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:20.495507  876668 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:20.495538  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:20.495627  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:20.495718  876668 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:20.495814  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:20.504112  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:20.527100  876668 start.go:303] post-start completed in 132.264495ms
	I1114 15:54:20.527124  876668 fix.go:56] fixHost completed within 21.989733182s
	I1114 15:54:20.527150  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.530055  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530460  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.530502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530660  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.530868  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531069  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531281  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.531458  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.531874  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.531889  876668 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:54:20.667502  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977260.612374456
	
	I1114 15:54:20.667529  876668 fix.go:206] guest clock: 1699977260.612374456
	I1114 15:54:20.667536  876668 fix.go:219] Guest: 2023-11-14 15:54:20.612374456 +0000 UTC Remote: 2023-11-14 15:54:20.527127621 +0000 UTC m=+270.585277055 (delta=85.246835ms)
	I1114 15:54:20.667591  876668 fix.go:190] guest clock delta is within tolerance: 85.246835ms
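
The guest-clock lines above compare the timestamp reported by the VM (the date command run over SSH just above) against the host's wall clock, and fix.go treats a small difference as within tolerance. A small sketch of that delta computation, using the exact timestamps from the log; the 1s tolerance here is illustrative, not minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest clock and the
// host clock, mirroring the "guest clock delta" line in the log above.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Values taken from the log: Guest 15:54:20.612374456, Remote 15:54:20.527127621.
	guest := time.Date(2023, 11, 14, 15, 54, 20, 612374456, time.UTC)
	host := time.Date(2023, 11, 14, 15, 54, 20, 527127621, time.UTC)

	const tolerance = time.Second // hypothetical tolerance, for illustration only
	delta := clockDelta(guest, host)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance)
	// Prints delta=85.246835ms withinTolerance=true, matching the log.
}
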
	I1114 15:54:20.667604  876668 start.go:83] releasing machines lock for "default-k8s-diff-port-529430", held for 22.130251397s
	I1114 15:54:20.667642  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.668017  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:20.671690  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672166  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.672199  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672583  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673190  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673412  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673507  876668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:20.673573  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.673677  876668 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:20.673702  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.677394  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677505  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677813  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.677847  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678133  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.678165  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678228  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678331  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678456  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678543  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678799  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.679008  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.770378  876668 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:20.799026  876668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:20.952410  876668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:20.960020  876668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:20.960164  876668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:20.976497  876668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:20.976537  876668 start.go:472] detecting cgroup driver to use...
	I1114 15:54:20.976623  876668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:20.995510  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:21.008750  876668 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:21.008824  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:21.021811  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:21.035329  876668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:21.148775  876668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:21.285242  876668 docker.go:219] disabling docker service ...
	I1114 15:54:21.285318  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:21.298782  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:21.316123  876668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:21.488090  876668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:21.618889  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:21.632974  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:21.655781  876668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:21.655882  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.669231  876668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:21.669316  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.678786  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.688193  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.698797  876668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:21.709360  876668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:21.718312  876668 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:21.718380  876668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:21.736502  876668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:21.746439  876668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:21.863214  876668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:22.102179  876668 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:22.102265  876668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:22.108046  876668 start.go:540] Will wait 60s for crictl version
	I1114 15:54:22.108121  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:54:22.113795  876668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:22.165127  876668 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:22.165229  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.225931  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.294400  876668 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
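
The CRI-O preparation above boils down to a handful of sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a service restart. A rough local sketch of those steps; the commands are copied from the Run lines above, but the Go wrapper is illustrative and not minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		// sh -c mirrors how each command is executed remotely in the log above.
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			fmt.Printf("step failed: %s\n%s\n", s, out)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}
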
	I1114 15:54:17.442023  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.442115  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.454984  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:17.942288  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.942367  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.954587  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:18.415437  876396 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:18.415476  876396 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:18.415510  876396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:18.415594  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:18.457148  876396 cri.go:89] found id: ""
	I1114 15:54:18.457220  876396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:18.473763  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:18.482554  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:18.482618  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491282  876396 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491331  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:18.611750  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.639893  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.02808682s)
	I1114 15:54:19.639964  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.850775  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.939183  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:20.055296  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:20.055384  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.076978  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.591616  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.091982  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.591312  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.635294  876396 api_server.go:72] duration metric: took 1.579988958s to wait for apiserver process to appear ...
	I1114 15:54:21.635323  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:21.635345  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:20.696162  876065 main.go:141] libmachine: (no-preload-490998) Calling .Start
	I1114 15:54:20.696380  876065 main.go:141] libmachine: (no-preload-490998) Ensuring networks are active...
	I1114 15:54:20.697208  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network default is active
	I1114 15:54:20.697665  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network mk-no-preload-490998 is active
	I1114 15:54:20.698105  876065 main.go:141] libmachine: (no-preload-490998) Getting domain xml...
	I1114 15:54:20.698815  876065 main.go:141] libmachine: (no-preload-490998) Creating domain...
	I1114 15:54:22.152078  876065 main.go:141] libmachine: (no-preload-490998) Waiting to get IP...
	I1114 15:54:22.153475  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.153983  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.154071  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.153960  877583 retry.go:31] will retry after 305.242943ms: waiting for machine to come up
	I1114 15:54:22.460636  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.461432  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.461609  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.461568  877583 retry.go:31] will retry after 354.226558ms: waiting for machine to come up
	I1114 15:54:22.817225  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.817884  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.817999  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.817955  877583 retry.go:31] will retry after 337.727596ms: waiting for machine to come up
	I1114 15:54:23.157897  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.158614  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.158724  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.158679  877583 retry.go:31] will retry after 375.356441ms: waiting for machine to come up
	I1114 15:54:23.536061  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.536607  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.536633  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.536565  877583 retry.go:31] will retry after 652.853452ms: waiting for machine to come up
	I1114 15:54:22.295757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:22.299345  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.299749  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:22.299788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.300017  876668 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:22.305363  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:22.318715  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:22.318773  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:22.368522  876668 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:22.368595  876668 ssh_runner.go:195] Run: which lz4
	I1114 15:54:22.373798  876668 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:54:22.379337  876668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:22.379368  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:54:24.194028  876668 crio.go:444] Took 1.820276 seconds to copy over tarball
	I1114 15:54:24.194111  876668 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:21.457059  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:23.458432  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:26.636325  876396 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:54:26.636396  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:24.191080  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:24.191648  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:24.191685  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:24.191565  877583 retry.go:31] will retry after 883.93292ms: waiting for machine to come up
	I1114 15:54:25.076820  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:25.077325  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:25.077370  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:25.077290  877583 retry.go:31] will retry after 1.071889504s: waiting for machine to come up
	I1114 15:54:26.151239  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:26.151777  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:26.151812  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:26.151734  877583 retry.go:31] will retry after 1.05055701s: waiting for machine to come up
	I1114 15:54:27.204714  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:27.205193  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:27.205216  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:27.205147  877583 retry.go:31] will retry after 1.366779273s: waiting for machine to come up
	I1114 15:54:28.573131  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:28.573578  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:28.573605  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:28.573548  877583 retry.go:31] will retry after 1.629033633s: waiting for machine to come up
	I1114 15:54:27.635092  876668 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.440943465s)
	I1114 15:54:27.635134  876668 crio.go:451] Took 3.441078 seconds to extract the tarball
	I1114 15:54:27.635148  876668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:27.685486  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:27.742411  876668 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:54:27.742499  876668 cache_images.go:84] Images are preloaded, skipping loading
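
The preload logic above asks crictl for the image list in JSON and only copies and extracts the preloaded tarball when an expected image such as registry.k8s.io/kube-apiserver:v1.28.3 is missing. A sketch of that check; the JSON field names assumed here ({"images":[{"repoTags":[...]}]}) match crictl's output format, not minikube's own code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.28.3" // the image checked in the log
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("not preloaded, tarball extraction required:", want)
}
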
	I1114 15:54:27.742596  876668 ssh_runner.go:195] Run: crio config
	I1114 15:54:27.815555  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:27.815579  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:27.815601  876668 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:27.815624  876668 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-529430 NodeName:default-k8s-diff-port-529430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:54:27.815789  876668 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-529430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:27.815921  876668 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-529430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
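
The kubeadm options logged above are rendered into the InitConfiguration/ClusterConfiguration and kubelet unit shown; note that bindPort and controlPlaneEndpoint use 8444, the "diff port" this profile exists to exercise, rather than the default 8443. A minimal rendering sketch with a hypothetical template (not minikube's actual one), populated with values taken from the log:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts holds only the handful of fields this sketch renders.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.61.196",
		APIServerPort:    8444, // the non-default port used by this test profile
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.28.3",
		NodeName:         "default-k8s-diff-port-529430",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}
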
	I1114 15:54:27.815999  876668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:54:27.825716  876668 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:27.825799  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:27.838987  876668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1114 15:54:27.855187  876668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:27.872995  876668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1114 15:54:27.890455  876668 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:27.895678  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:27.909953  876668 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430 for IP: 192.168.61.196
	I1114 15:54:27.909999  876668 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:27.910204  876668 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:27.910271  876668 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:27.910463  876668 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/client.key
	I1114 15:54:27.910558  876668 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key.0d67e2f2
	I1114 15:54:27.910616  876668 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key
	I1114 15:54:27.910753  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:27.910797  876668 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:27.910811  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:27.910872  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:27.910917  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:27.910950  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:27.911007  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:27.911985  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:27.937341  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:27.963511  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:27.990011  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:54:28.016668  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:28.048528  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:28.077392  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:28.107784  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:28.136600  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:28.163995  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:28.191715  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:28.223205  876668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:28.243672  876668 ssh_runner.go:195] Run: openssl version
	I1114 15:54:28.249895  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:28.260568  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266792  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266887  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.273048  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:28.283458  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:28.294810  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300316  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300384  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.306193  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:28.319260  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:28.332843  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339044  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339120  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.346094  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:28.359711  876668 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:28.365300  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:28.372965  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:28.380378  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:28.387801  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:28.395228  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:28.401252  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
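
Each of the openssl runs above uses -checkend 86400, i.e. "exit non-zero if this certificate expires within the next 86400 seconds (24 hours)". An equivalent check in Go against one of the same certificate paths, meant to be run on the guest rather than the host:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same path as the first -checkend run in the log above.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		os.Exit(1)
	}
	// Mirror of `-checkend 86400`: fail if NotAfter falls inside the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
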
	I1114 15:54:28.407435  876668 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:28.407581  876668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:28.407663  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:28.462877  876668 cri.go:89] found id: ""
	I1114 15:54:28.462962  876668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:28.473800  876668 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:28.473828  876668 kubeadm.go:636] restartCluster start
	I1114 15:54:28.473885  876668 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:28.485255  876668 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.486649  876668 kubeconfig.go:92] found "default-k8s-diff-port-529430" server: "https://192.168.61.196:8444"
	I1114 15:54:28.489408  876668 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:28.499927  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.499990  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.512175  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.512193  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.512238  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.524128  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.025143  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.025234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.040757  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.525035  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.525153  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.538214  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.174172  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:28.174207  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:28.674934  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.145414  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.145459  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.174596  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.231115  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.231157  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.674653  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.813013  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.813052  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.174424  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.183371  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:30.183427  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.675007  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.686069  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:54:30.697376  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:54:30.697472  876396 api_server.go:131] duration metric: took 9.062139934s to wait for apiserver health ...
	I1114 15:54:30.697503  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:30.697535  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:30.699476  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:25.957052  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:28.490572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:30.701025  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:30.729153  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:30.770856  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:30.785989  876396 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:30.786041  876396 system_pods.go:61] "coredns-5644d7b6d9-dxtd8" [4d22eb1f-551c-49a1-a519-7420c3774e46] Running
	I1114 15:54:30.786051  876396 system_pods.go:61] "etcd-old-k8s-version-842105" [d4d5d869-b609-4017-8cf1-071b11f69d18] Running
	I1114 15:54:30.786057  876396 system_pods.go:61] "kube-apiserver-old-k8s-version-842105" [43e84141-4938-4808-bba5-14080a0a7b9e] Running
	I1114 15:54:30.786063  876396 system_pods.go:61] "kube-controller-manager-old-k8s-version-842105" [8fca7797-f3a1-4223-a921-0819aca95ce7] Running
	I1114 15:54:30.786069  876396 system_pods.go:61] "kube-proxy-kw2ns" [c6b5fbe3-a473-4120-bc41-fb85f6d3841d] Running
	I1114 15:54:30.786074  876396 system_pods.go:61] "kube-scheduler-old-k8s-version-842105" [c9cad8bb-b7a9-44fd-92d3-d3360284c9f3] Running
	I1114 15:54:30.786082  876396 system_pods.go:61] "metrics-server-74d5856cc6-q9hc5" [1333b6de-5f3f-4937-8e73-d2b7f2c6d37e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:30.786091  876396 system_pods.go:61] "storage-provisioner" [2d95ef7e-626e-4840-9f5d-708cd8c66576] Running
	I1114 15:54:30.786107  876396 system_pods.go:74] duration metric: took 15.207693ms to wait for pod list to return data ...
	I1114 15:54:30.786125  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:30.799034  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:30.799089  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:30.799105  876396 node_conditions.go:105] duration metric: took 12.974469ms to run NodePressure ...
	I1114 15:54:30.799137  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:31.065040  876396 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:31.068697  876396 retry.go:31] will retry after 147.435912ms: kubelet not initialised
	I1114 15:54:31.225671  876396 retry.go:31] will retry after 334.031544ms: kubelet not initialised
	I1114 15:54:31.565487  876396 retry.go:31] will retry after 641.328262ms: kubelet not initialised
	I1114 15:54:32.215327  876396 retry.go:31] will retry after 1.211422414s: kubelet not initialised
	I1114 15:54:30.204276  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:30.204775  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:30.204811  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:30.204713  877583 retry.go:31] will retry after 1.909641151s: waiting for machine to come up
	I1114 15:54:32.115658  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:32.116175  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:32.116209  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:32.116116  877583 retry.go:31] will retry after 3.266336566s: waiting for machine to come up
	I1114 15:54:30.024900  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.025024  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.041104  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.524842  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.524920  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.540643  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.025166  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.040723  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.525252  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.525364  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.537978  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.024495  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.024626  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.037625  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.524934  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.525053  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.540579  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.025237  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.025366  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.037675  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.524206  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.524300  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.537100  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.025150  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.039435  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.525030  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.525140  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.541014  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.957869  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.458285  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:35.458815  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.432677  876396 retry.go:31] will retry after 864.36813ms: kubelet not initialised
	I1114 15:54:34.302450  876396 retry.go:31] will retry after 2.833071739s: kubelet not initialised
	I1114 15:54:37.142128  876396 retry.go:31] will retry after 2.880672349s: kubelet not initialised
	I1114 15:54:35.386010  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:35.386483  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:35.386526  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:35.386417  877583 retry.go:31] will retry after 3.791360608s: waiting for machine to come up
	I1114 15:54:35.024814  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.024924  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.038035  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:35.524433  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.524540  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.538065  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.024585  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.024690  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.036540  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.525201  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.525293  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.537751  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.024292  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.024388  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.037480  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.525115  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.525234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.538365  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.025002  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:38.025148  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:38.036994  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.500770  876668 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:38.500813  876668 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:38.500860  876668 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:38.500951  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:38.538468  876668 cri.go:89] found id: ""
	I1114 15:54:38.538571  876668 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:38.554809  876668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:38.563961  876668 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:38.564025  876668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572905  876668 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572930  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:38.694403  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.614869  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.815977  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.914051  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:37.956992  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.957705  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.179165  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.179746  876065 main.go:141] libmachine: (no-preload-490998) Found IP for machine: 192.168.50.251
	I1114 15:54:39.179773  876065 main.go:141] libmachine: (no-preload-490998) Reserving static IP address...
	I1114 15:54:39.179792  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has current primary IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.180259  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.180295  876065 main.go:141] libmachine: (no-preload-490998) Reserved static IP address: 192.168.50.251
	I1114 15:54:39.180328  876065 main.go:141] libmachine: (no-preload-490998) DBG | skip adding static IP to network mk-no-preload-490998 - found existing host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"}
	I1114 15:54:39.180349  876065 main.go:141] libmachine: (no-preload-490998) DBG | Getting to WaitForSSH function...
	I1114 15:54:39.180368  876065 main.go:141] libmachine: (no-preload-490998) Waiting for SSH to be available...
	I1114 15:54:39.182637  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183005  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.183037  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183157  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH client type: external
	I1114 15:54:39.183185  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa (-rw-------)
	I1114 15:54:39.183218  876065 main.go:141] libmachine: (no-preload-490998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:39.183239  876065 main.go:141] libmachine: (no-preload-490998) DBG | About to run SSH command:
	I1114 15:54:39.183251  876065 main.go:141] libmachine: (no-preload-490998) DBG | exit 0
	I1114 15:54:39.276793  876065 main.go:141] libmachine: (no-preload-490998) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:39.277095  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetConfigRaw
	I1114 15:54:39.277799  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.281002  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281360  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.281393  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281696  876065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/config.json ...
	I1114 15:54:39.281970  876065 machine.go:88] provisioning docker machine ...
	I1114 15:54:39.281997  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:39.282236  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282395  876065 buildroot.go:166] provisioning hostname "no-preload-490998"
	I1114 15:54:39.282416  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.285099  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285498  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.285527  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285695  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.285865  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286026  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286277  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.286523  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.286978  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.287007  876065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-490998 && echo "no-preload-490998" | sudo tee /etc/hostname
	I1114 15:54:39.419452  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-490998
	
	I1114 15:54:39.419493  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.422544  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.422912  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.422951  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.423134  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.423360  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423591  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423756  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.423915  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.424324  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.424363  876065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-490998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-490998/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-490998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:39.552044  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:39.552085  876065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:39.552106  876065 buildroot.go:174] setting up certificates
	I1114 15:54:39.552118  876065 provision.go:83] configureAuth start
	I1114 15:54:39.552127  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.552438  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.555275  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555660  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.555771  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555936  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.558628  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559004  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.559042  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559181  876065 provision.go:138] copyHostCerts
	I1114 15:54:39.559247  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:39.559273  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:39.559337  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:39.559498  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:39.559512  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:39.559547  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:39.559612  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:39.559620  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:39.559644  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:39.559697  876065 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.no-preload-490998 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube no-preload-490998]
	I1114 15:54:39.728218  876065 provision.go:172] copyRemoteCerts
	I1114 15:54:39.728286  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:39.728314  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.731482  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.731920  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.731966  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.732138  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.732376  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.732605  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.732802  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:39.819537  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:39.848716  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 15:54:39.876339  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:54:39.917428  876065 provision.go:86] duration metric: configureAuth took 365.293803ms
	I1114 15:54:39.917461  876065 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:39.917686  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:39.917783  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.920823  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921417  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.921457  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921785  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.921989  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922170  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922351  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.922516  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.922992  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.923017  876065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:40.270821  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:40.270851  876065 machine.go:91] provisioned docker machine in 988.864728ms
	I1114 15:54:40.270865  876065 start.go:300] post-start starting for "no-preload-490998" (driver="kvm2")
	I1114 15:54:40.270878  876065 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:40.270910  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.271296  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:40.271331  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.274197  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274517  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.274547  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274784  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.275045  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.275209  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.275379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.363810  876065 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:40.368485  876065 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:40.368515  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:40.368599  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:40.368688  876065 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:40.368820  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:40.378691  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:40.401789  876065 start.go:303] post-start completed in 130.90895ms
	I1114 15:54:40.401816  876065 fix.go:56] fixHost completed within 19.734039545s
	I1114 15:54:40.401848  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.404413  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404791  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.404824  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404962  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.405212  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405442  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405614  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.405840  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:40.406318  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:40.406338  876065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:54:40.521875  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977280.490539427
	
	I1114 15:54:40.521907  876065 fix.go:206] guest clock: 1699977280.490539427
	I1114 15:54:40.521917  876065 fix.go:219] Guest: 2023-11-14 15:54:40.490539427 +0000 UTC Remote: 2023-11-14 15:54:40.401821935 +0000 UTC m=+361.372113130 (delta=88.717492ms)
	I1114 15:54:40.521945  876065 fix.go:190] guest clock delta is within tolerance: 88.717492ms
	I1114 15:54:40.521952  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 19.854220019s
	I1114 15:54:40.521990  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.522294  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:40.525204  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525567  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.525611  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525786  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526412  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526589  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526682  876065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:40.526727  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.526847  876065 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:40.526881  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.529470  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529673  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529863  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.529895  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530047  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530189  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.530224  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530226  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530415  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530480  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530594  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530677  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.530726  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530881  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.634647  876065 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:40.641680  876065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:40.784919  876065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:40.791364  876065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:40.791466  876065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:40.814464  876065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:40.814496  876065 start.go:472] detecting cgroup driver to use...
	I1114 15:54:40.814608  876065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:40.834599  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:40.851666  876065 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:40.851761  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:40.870359  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:40.885345  876065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:41.042220  876065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:41.174015  876065 docker.go:219] disabling docker service ...
	I1114 15:54:41.174101  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:41.188849  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:41.201322  876065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:41.329124  876065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:41.456116  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:41.477162  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:41.497860  876065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:41.497932  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.509750  876065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:41.509843  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.521944  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.532916  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.545469  876065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:41.556976  876065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:41.567322  876065 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:41.567401  876065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:41.583043  876065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:41.593941  876065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:41.717384  876065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:41.907278  876065 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:41.907351  876065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:41.912763  876065 start.go:540] Will wait 60s for crictl version
	I1114 15:54:41.912843  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:41.917105  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:41.965326  876065 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:41.965418  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.016065  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.079721  876065 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:40.028538  876396 retry.go:31] will retry after 2.943912692s: kubelet not initialised
	I1114 15:54:42.081301  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:42.084358  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.084771  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:42.084805  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.085014  876065 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:42.089551  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:42.102676  876065 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:42.102730  876065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:42.145434  876065 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:42.145479  876065 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:42.145570  876065 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.145592  876065 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.145621  876065 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.145620  876065 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.145662  876065 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1114 15:54:42.145692  876065 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.145819  876065 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.145564  876065 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.147966  876065 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.147967  876065 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.148056  876065 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1114 15:54:42.147970  876065 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.148093  876065 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.147960  876065 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.318368  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1114 15:54:42.318578  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.325647  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.340363  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.375378  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.473131  876065 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1114 15:54:42.473195  876065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.473202  876065 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1114 15:54:42.473235  876065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.473253  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.473283  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.511600  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.554432  876065 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1114 15:54:42.554502  876065 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1114 15:54:42.554572  876065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.554599  876065 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1114 15:54:42.554618  876065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.554632  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554657  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554532  876065 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.554724  876065 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1114 15:54:42.554750  876065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.554776  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554778  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554907  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.554969  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.576922  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.577004  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.577114  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.577535  876065 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1114 15:54:42.577591  876065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.577631  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.655186  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.655318  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1114 15:54:42.655449  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1114 15:54:42.655473  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:42.655536  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.706186  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1114 15:54:42.706257  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.706283  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1114 15:54:42.706304  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:42.706372  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:42.706408  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1114 15:54:42.706548  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:42.737003  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1114 15:54:42.737032  876065 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737093  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737102  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1114 15:54:42.737179  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1114 15:54:42.737237  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:42.769211  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1114 15:54:42.769251  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1114 15:54:42.769304  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1114 15:54:42.769289  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 15:54:42.769428  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:54:44.006164  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.268897316s)
	I1114 15:54:44.006206  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1114 15:54:44.006240  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.236783751s)
	I1114 15:54:44.006275  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1114 15:54:44.006283  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.269163879s)
	I1114 15:54:44.006297  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1114 15:54:44.006322  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:44.006375  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:40.016931  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:40.017030  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.030798  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.541996  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.042023  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.542537  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.042880  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.542514  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.577021  876668 api_server.go:72] duration metric: took 2.560093027s to wait for apiserver process to appear ...
	I1114 15:54:42.577059  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:42.577088  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.577767  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:42.577805  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.578225  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:43.078953  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.457425  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:44.460290  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:42.978588  876396 retry.go:31] will retry after 5.776997827s: kubelet not initialised
	I1114 15:54:46.326192  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.326231  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.326249  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.390609  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.390668  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.579140  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.590569  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:46.590606  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.079186  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.084460  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.084483  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.578774  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.588878  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.588919  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:48.079047  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:48.084809  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:54:48.098877  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:48.098941  876668 api_server.go:131] duration metric: took 5.521873886s to wait for apiserver health ...
	I1114 15:54:48.098955  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:48.098972  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:48.101010  876668 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:47.219243  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.212835904s)
	I1114 15:54:47.219281  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1114 15:54:47.219308  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:47.219472  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:48.102440  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:48.154163  876668 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
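	The 457-byte conflist itself is not echoed in the log; a representative bridge CNI configuration of the kind minikube generates for /etc/cni/net.d/1-k8s.conflist looks like the following (values such as the cniVersion and pod subnet are illustrative assumptions, not copied from the file that was written):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }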
	I1114 15:54:48.212336  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:48.229819  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:48.229862  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:48.229874  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:48.229886  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:48.229896  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:48.229905  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:48.229913  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:48.229923  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:48.229934  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:48.229944  876668 system_pods.go:74] duration metric: took 17.577706ms to wait for pod list to return data ...
	I1114 15:54:48.229961  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:48.236002  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:48.236043  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:48.236057  876668 node_conditions.go:105] duration metric: took 6.089691ms to run NodePressure ...
	I1114 15:54:48.236093  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:48.608191  876668 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622192  876668 kubeadm.go:787] kubelet initialised
	I1114 15:54:48.622221  876668 kubeadm.go:788] duration metric: took 13.999979ms waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622232  876668 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:48.629670  876668 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.636566  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636594  876668 pod_ready.go:81] duration metric: took 6.892422ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.636611  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636619  876668 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.643982  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644013  876668 pod_ready.go:81] duration metric: took 7.383826ms waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.644030  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644037  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.649791  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649815  876668 pod_ready.go:81] duration metric: took 5.769971ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.649825  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649833  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.655071  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655100  876668 pod_ready.go:81] duration metric: took 5.259243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.655113  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655121  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.018817  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018849  876668 pod_ready.go:81] duration metric: took 363.719341ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.018863  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018872  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.417556  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417588  876668 pod_ready.go:81] duration metric: took 398.704259ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.417600  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417607  876668 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.816654  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816692  876668 pod_ready.go:81] duration metric: took 399.075859ms waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.816712  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816721  876668 pod_ready.go:38] duration metric: took 1.194471296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:49.816765  876668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:54:49.830335  876668 ops.go:34] apiserver oom_adj: -16
	I1114 15:54:49.830363  876668 kubeadm.go:640] restartCluster took 21.356528166s
	I1114 15:54:49.830372  876668 kubeadm.go:406] StartCluster complete in 21.422955285s
	I1114 15:54:49.830390  876668 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.830502  876668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:54:49.832470  876668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.859435  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:54:49.859707  876668 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:54:49.859810  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:49.859852  876668 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859873  876668 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859885  876668 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-529430"
	I1114 15:54:49.859892  876668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-529430"
	W1114 15:54:49.859895  876668 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:54:49.859954  876668 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859973  876668 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.859981  876668 addons.go:240] addon metrics-server should already be in state true
	I1114 15:54:49.860025  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.859956  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.860306  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860345  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860438  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860452  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860489  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860491  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.866006  876668 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-529430" context rescaled to 1 replicas
	I1114 15:54:49.866053  876668 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:54:49.878650  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I1114 15:54:49.878976  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I1114 15:54:49.879627  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I1114 15:54:49.891649  876668 out.go:177] * Verifying Kubernetes components...
	I1114 15:54:49.893450  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:54:49.892232  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892275  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892329  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.894259  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894282  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894473  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894486  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894610  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894623  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894687  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894892  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.894952  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894993  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.895598  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.895642  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.896296  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.896321  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.899095  876668 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.899120  876668 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:54:49.899151  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.899576  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.899622  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.917834  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1114 15:54:49.917842  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I1114 15:54:49.918442  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.918505  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.919007  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919026  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919167  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919187  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919493  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919562  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919803  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.920191  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.920237  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.922764  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1114 15:54:49.922969  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.924925  876668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:49.923380  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.926603  876668 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:49.926625  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:54:49.926647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.927991  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.928012  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.928459  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.928683  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.930696  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.930740  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.931131  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.931154  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.931330  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.931491  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.931647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.931775  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.934128  876668 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:54:49.936007  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:54:49.936031  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:54:49.936056  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.939725  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.939782  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I1114 15:54:49.940336  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.940442  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.940467  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.940822  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.941060  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.941093  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.941095  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.941211  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.941388  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.941856  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.942057  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.943639  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.943972  876668 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:49.943991  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:54:49.944009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.947172  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947631  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.947663  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947902  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.948102  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.948278  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.948579  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:46.955010  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:48.955172  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:50.066801  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:50.084526  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:54:50.084555  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:54:50.145315  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:50.145671  876668 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:50.146084  876668 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 15:54:50.151627  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:54:50.151646  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:54:50.216318  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:50.216349  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:54:50.316434  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
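	The four metrics-server manifests applied here are not reproduced in the log. For orientation, metrics-apiservice.yaml registers the aggregated metrics API with the apiserver; a representative APIService (assumed shape, not verbatim from the addon) looks like:

	    apiVersion: apiregistration.k8s.io/v1
	    kind: APIService
	    metadata:
	      name: v1beta1.metrics.k8s.io
	    spec:
	      group: metrics.k8s.io
	      version: v1beta1
	      service:
	        name: metrics-server
	        namespace: kube-system
	      insecureSkipTLSVerify: true
	      groupPriorityMinimum: 100
	      versionPriority: 100

	The deployment, RBAC, and Service manifests applied alongside it provide the metrics-server workload that the later "Verifying addon metrics-server" step waits on.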
	I1114 15:54:51.787528  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642164298s)
	I1114 15:54:51.787644  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787672  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.787695  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.720847981s)
	I1114 15:54:51.787744  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788039  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788064  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788075  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788086  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788094  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788109  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788119  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790294  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790322  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.790327  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790349  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.803844  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.803875  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.804205  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.804238  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.804239  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.925929  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.609443677s)
	I1114 15:54:51.926001  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926019  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926385  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926429  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.926456  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926468  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926483  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926795  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926814  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926826  876668 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-529430"
	I1114 15:54:51.926829  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:52.146969  876668 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1114 15:54:48.761692  876396 retry.go:31] will retry after 7.067385779s: kubelet not initialised
	I1114 15:54:50.000157  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.780649338s)
	I1114 15:54:50.000194  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1114 15:54:50.000227  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:50.000281  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:52.291215  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (2.290903759s)
	I1114 15:54:52.291244  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1114 15:54:52.291271  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:52.291312  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:53.739008  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.447671823s)
	I1114 15:54:53.739041  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1114 15:54:53.739066  876065 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:53.739126  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:52.194351  876668 addons.go:502] enable addons completed in 2.33463136s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1114 15:54:52.220203  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:54.220773  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:50.957159  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:53.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.458026  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.834422  876396 retry.go:31] will retry after 18.847542128s: kubelet not initialised
	I1114 15:54:56.221753  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:56.720961  876668 node_ready.go:49] node "default-k8s-diff-port-529430" has status "Ready":"True"
	I1114 15:54:56.720989  876668 node_ready.go:38] duration metric: took 6.575288694s waiting for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:56.721001  876668 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:56.730382  876668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736722  876668 pod_ready.go:92] pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:56.736761  876668 pod_ready.go:81] duration metric: took 6.345209ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736774  876668 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:58.773825  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:57.458580  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:59.956188  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:01.061681  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.322513643s)
	I1114 15:55:01.061716  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1114 15:55:01.061753  876065 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.061812  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.811277  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1114 15:55:01.811342  876065 cache_images.go:123] Successfully loaded all cached images
	I1114 15:55:01.811352  876065 cache_images.go:92] LoadImages completed in 19.665858366s
	I1114 15:55:01.811461  876065 ssh_runner.go:195] Run: crio config
	I1114 15:55:01.881576  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:01.881603  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:01.881622  876065 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:55:01.881646  876065 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-490998 NodeName:no-preload-490998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:55:01.881781  876065 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-490998"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:55:01.881859  876065 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-490998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:55:01.881918  876065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:55:01.892613  876065 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:55:01.892696  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:55:01.902267  876065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1114 15:55:01.919728  876065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:55:01.936188  876065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1114 15:55:01.954510  876065 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1114 15:55:01.958337  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:55:01.970290  876065 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998 for IP: 192.168.50.251
	I1114 15:55:01.970328  876065 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:55:01.970513  876065 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:55:01.970563  876065 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:55:01.970662  876065 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/client.key
	I1114 15:55:01.970794  876065 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key.6b358a63
	I1114 15:55:01.970857  876065 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key
	I1114 15:55:01.971003  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:55:01.971065  876065 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:55:01.971079  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:55:01.971123  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:55:01.971160  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:55:01.971192  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:55:01.971252  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:55:01.972129  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:55:01.996012  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:55:02.020778  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:55:02.044395  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:55:02.066866  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:55:02.089331  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:55:02.113148  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:55:02.136083  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:55:02.157833  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:55:02.181150  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:55:02.203155  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:55:02.225839  876065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:55:02.243335  876065 ssh_runner.go:195] Run: openssl version
	I1114 15:55:02.249465  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:55:02.259874  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264340  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264401  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.270441  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:55:02.282031  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:55:02.293297  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298093  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298155  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.303668  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:55:02.315423  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:55:02.325976  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332124  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332194  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.339377  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:55:02.350318  876065 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:55:02.354796  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:55:02.360867  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:55:02.366306  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:55:02.372186  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:55:02.377900  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:55:02.383519  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:55:02.389128  876065 kubeadm.go:404] StartCluster: {Name:no-preload-490998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:55:02.389229  876065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:55:02.389304  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:02.428473  876065 cri.go:89] found id: ""
	I1114 15:55:02.428578  876065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:55:02.439944  876065 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:55:02.439969  876065 kubeadm.go:636] restartCluster start
	I1114 15:55:02.440079  876065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:55:02.450025  876065 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.451533  876065 kubeconfig.go:92] found "no-preload-490998" server: "https://192.168.50.251:8443"
	I1114 15:55:02.454290  876065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:55:02.463352  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.463410  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.474007  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.474025  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.474065  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.484826  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.985519  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.985595  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.998224  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.485905  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.486059  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.499281  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.985805  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.985925  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.998086  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:00.819591  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:02.773550  876668 pod_ready.go:92] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.773573  876668 pod_ready.go:81] duration metric: took 6.036790568s waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.773582  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778746  876668 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.778764  876668 pod_ready.go:81] duration metric: took 5.176465ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778772  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784332  876668 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.784353  876668 pod_ready.go:81] duration metric: took 5.572815ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784366  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789492  876668 pod_ready.go:92] pod "kube-proxy-zpchs" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.789514  876668 pod_ready.go:81] duration metric: took 5.139759ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789524  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796606  876668 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.796628  876668 pod_ready.go:81] duration metric: took 7.097079ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796639  876668 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.454894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.956449  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.485284  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.485387  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.498240  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:04.985846  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.985936  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.998901  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.485250  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.485365  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.497261  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.985411  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.985511  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.997656  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.485227  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.485332  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.497310  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.985893  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.985977  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.997585  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.485903  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.486001  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.498532  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.985881  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.985958  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.997898  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.485400  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.485512  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.497446  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.985912  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.986015  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.998121  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.081742  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:07.082515  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.580987  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:06.957307  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.455227  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.485641  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.485735  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.498347  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:09.985970  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.986073  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.997958  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.485503  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.485600  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.497407  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.985577  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.985655  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.998624  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.485146  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.485250  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.497837  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.985423  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.985551  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.997959  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:12.464381  876065 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:55:12.464449  876065 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:55:12.464478  876065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:55:12.464582  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:12.505435  876065 cri.go:89] found id: ""
	I1114 15:55:12.505532  876065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:55:12.522470  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:55:12.532890  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:55:12.532982  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542115  876065 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542141  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:12.684875  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:13.897464  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.21254145s)
	I1114 15:55:13.897509  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:11.582332  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.085102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:11.955438  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.455506  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.687822  876396 kubeadm.go:787] kubelet initialised
	I1114 15:55:14.687849  876396 kubeadm.go:788] duration metric: took 43.622781532s waiting for restarted kubelet to initialise ...
	I1114 15:55:14.687857  876396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:14.693560  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698796  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.698819  876396 pod_ready.go:81] duration metric: took 5.232669ms waiting for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698828  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703879  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.703903  876396 pod_ready.go:81] duration metric: took 5.067006ms waiting for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703916  876396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708064  876396 pod_ready.go:92] pod "etcd-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.708093  876396 pod_ready.go:81] duration metric: took 4.168333ms waiting for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708106  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713030  876396 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.713055  876396 pod_ready.go:81] duration metric: took 4.939899ms waiting for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713067  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087824  876396 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.087857  876396 pod_ready.go:81] duration metric: took 374.780312ms waiting for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087873  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.486984  876396 pod_ready.go:92] pod "kube-proxy-kw2ns" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.487011  876396 pod_ready.go:81] duration metric: took 399.130772ms waiting for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.487020  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886624  876396 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.886658  876396 pod_ready.go:81] duration metric: took 399.628757ms waiting for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886671  876396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.096314  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.174495  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.254647  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:55:14.254765  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.273596  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.788350  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.288506  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.788580  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.288476  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.787853  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.816380  876065 api_server.go:72] duration metric: took 2.561735945s to wait for apiserver process to appear ...
	I1114 15:55:16.816408  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:55:16.816428  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:16.582309  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:18.584599  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:16.957605  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:19.457613  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.541438  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.541473  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:20.541490  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:20.582790  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.582838  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:21.083891  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.089625  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.089658  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:21.583184  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.599539  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.599576  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:22.083098  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:22.088480  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 15:55:22.096517  876065 api_server.go:141] control plane version: v1.28.3
	I1114 15:55:22.096545  876065 api_server.go:131] duration metric: took 5.280130119s to wait for apiserver health ...
	I1114 15:55:22.096558  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:22.096568  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:22.098612  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:55:18.194723  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.195126  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.196472  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.100184  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:55:22.125049  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:55:22.150893  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:55:22.163922  876065 system_pods.go:59] 8 kube-system pods found
	I1114 15:55:22.163958  876065 system_pods.go:61] "coredns-5dd5756b68-n77fz" [e2f5ce73-a65e-40da-b554-c929f093a1a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:55:22.163970  876065 system_pods.go:61] "etcd-no-preload-490998" [01e272b5-4463-431d-8ed1-f561a90b667d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:55:22.163983  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [529f79fd-eae5-44e9-971d-b3ecb5ed025d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:55:22.163989  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ea299234-2456-4171-bac0-8e8ff4998596] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:55:22.163994  876065 system_pods.go:61] "kube-proxy-6hqk5" [7233dd72-138c-4148-834b-2dcb83a4cf00] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:55:22.163999  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [666e8a03-50b1-4b08-84f3-c3c6ec8a5452] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:55:22.164005  876065 system_pods.go:61] "metrics-server-57f55c9bc5-6lg6h" [7afa1e38-c64c-4d03-9b00-5765e7e251ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:55:22.164036  876065 system_pods.go:61] "storage-provisioner" [1090ed8a-6424-4980-9ea7-b43e998d1eb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:55:22.164050  876065 system_pods.go:74] duration metric: took 13.132475ms to wait for pod list to return data ...
	I1114 15:55:22.164058  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:55:22.167930  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:55:22.168020  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 15:55:22.168033  876065 node_conditions.go:105] duration metric: took 3.969303ms to run NodePressure ...
	I1114 15:55:22.168055  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:22.456975  876065 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470174  876065 kubeadm.go:787] kubelet initialised
	I1114 15:55:22.470202  876065 kubeadm.go:788] duration metric: took 13.201285ms waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470216  876065 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:22.483150  876065 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:21.081628  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:23.083015  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:21.955808  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.455829  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.696004  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.195514  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.514847  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.519442  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.013526  876065 pod_ready.go:92] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:27.013584  876065 pod_ready.go:81] duration metric: took 4.530407487s waiting for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:27.013600  876065 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:29.032979  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:25.582366  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.080716  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.456123  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.955087  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:29.694646  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.194401  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:31.033810  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:33.033026  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.033058  876065 pod_ready.go:81] duration metric: took 6.019448696s waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.033071  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039148  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.039180  876065 pod_ready.go:81] duration metric: took 6.099138ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039194  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049651  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.049675  876065 pod_ready.go:81] duration metric: took 10.473938ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049685  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061928  876065 pod_ready.go:92] pod "kube-proxy-6hqk5" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.061971  876065 pod_ready.go:81] duration metric: took 12.277038ms waiting for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061984  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071422  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.071452  876065 pod_ready.go:81] duration metric: took 9.456301ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071465  876065 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:30.081625  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.082675  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.581547  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:30.955154  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.957772  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.454775  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.194959  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:36.195495  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.339391  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.340404  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.083295  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.584210  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.956659  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:38.696669  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.194485  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.838699  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.840605  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.081223  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.081468  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.454630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.455871  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:43.195172  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:45.195687  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.339878  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.838910  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.841677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.082382  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.582248  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.457525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.955133  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:47.695467  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.195263  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.339284  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.340315  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.082546  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.581238  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.955630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.454502  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.455395  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:52.694030  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:54.694593  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:56.695136  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.838685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.838864  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.581986  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.582037  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.582635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.955377  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.963166  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:01.195573  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.840578  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.338828  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.082323  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.582531  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.454214  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.454975  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:03.198457  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:05.694675  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.339632  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.340001  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.840358  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:07.082081  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:09.582483  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.455257  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.455373  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.457344  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.196641  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.693989  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.339845  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:13.839805  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.583615  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:14.083682  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.957092  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.456347  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.694792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.200049  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.339768  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:18.839853  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.583278  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:19.081994  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.954665  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.454724  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.697859  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.194201  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.194738  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.840457  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.339880  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:21.082759  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.581646  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.457299  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.954029  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.694448  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.696563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:25.342126  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:27.839304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.083724  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:28.582086  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.955572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.459642  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.194785  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.693765  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:30.339130  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:32.339361  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.083363  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.582213  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.955312  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.955576  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.694783  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:34.339538  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.839469  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.842444  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.081206  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.581263  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.457091  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.956262  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.195134  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:40.195875  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.343304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.839634  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.080021  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.081543  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.453768  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.455182  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.457368  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:42.694667  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.195018  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.197081  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:46.338815  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:48.339683  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.083139  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.582320  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.954718  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.455135  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:49.696028  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.194484  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.340708  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.845026  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.082635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.583485  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.455840  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.955079  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.194627  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.197158  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.338956  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.339983  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.081903  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.583102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.955380  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.956134  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.695165  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.196563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:59.340299  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.838688  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.839025  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:00.080983  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:02.582197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:04.583222  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.454473  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.455187  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.455628  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.694518  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.695324  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.839239  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.341567  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.081736  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.581889  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.954781  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.954913  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.194118  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.194688  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:12.195198  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.840317  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.582436  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.583580  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.955894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.459525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.195588  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.195922  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:15.339470  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:17.340059  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.081770  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.082006  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.954957  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.455211  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.695530  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.193801  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.839618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.839819  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:20.083348  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:22.581010  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.582114  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.958579  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.454848  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:23.196520  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:25.196779  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.339942  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.340928  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.841122  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.583453  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:29.082667  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.455784  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.954086  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:27.695279  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.194416  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.341608  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.343898  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.581417  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.583852  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.955148  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.455525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:32.693640  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:34.695191  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.194999  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.838294  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.838948  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:36.082181  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.582488  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.955108  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.454392  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:40.455291  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.195193  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.694849  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.839180  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.339359  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.081697  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:43.081876  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.455905  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.962584  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.194494  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:46.195239  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.840607  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.338846  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:45.582002  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.083197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.454539  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.455025  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.694661  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.695232  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.840392  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.580410  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.580961  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.581502  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:51.954903  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.454053  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:53.194450  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:55.196537  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.339997  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.839677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.080798  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.087078  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.454639  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:58.955200  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.696210  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:00.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:02.194961  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.339152  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.340037  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.838551  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.582808  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.084331  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.458365  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.955679  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.696770  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:07.195364  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:05.840151  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.340709  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.582153  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.083260  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.454599  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.458281  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.196674  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.696022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.839588  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.342479  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.583479  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:14.081451  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.954623  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.455233  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.147383  876220 pod_ready.go:81] duration metric: took 4m0.000589332s waiting for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	E1114 15:58:15.147416  876220 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:58:15.147446  876220 pod_ready.go:38] duration metric: took 4m11.626263996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:15.147477  876220 kubeadm.go:640] restartCluster took 4m32.524775831s
	W1114 15:58:15.147587  876220 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:58:15.147630  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:58:14.195824  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.696055  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.841115  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.341347  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.084839  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.582575  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.696792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:20.838749  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:22.840049  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.080598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.081173  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.694974  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:26.196317  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.340015  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.839312  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.081700  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.582564  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.582728  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.037182  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.889530708s)
	I1114 15:58:29.037253  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:29.052797  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:58:29.061624  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:58:29.070799  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:58:29.070848  876220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:58:29.303905  876220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:58:28.695122  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.696046  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.341383  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:32.341988  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:31.584191  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.082795  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:33.195568  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:35.695145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.839094  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.840873  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.086791  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:38.581233  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.234828  876220 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:58:40.234881  876220 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:58:40.234965  876220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:58:40.235127  876220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:58:40.235264  876220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:58:40.235361  876220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:58:40.237159  876220 out.go:204]   - Generating certificates and keys ...
	I1114 15:58:40.237276  876220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:58:40.237366  876220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:58:40.237511  876220 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:58:40.237608  876220 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:58:40.237697  876220 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:58:40.237791  876220 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:58:40.237883  876220 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:58:40.237975  876220 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:58:40.238066  876220 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:58:40.238161  876220 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:58:40.238213  876220 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:58:40.238283  876220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:58:40.238352  876220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:58:40.238422  876220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:58:40.238506  876220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:58:40.238582  876220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:58:40.238725  876220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:58:40.238816  876220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:58:40.240266  876220 out.go:204]   - Booting up control plane ...
	I1114 15:58:40.240404  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:58:40.240508  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:58:40.240593  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:58:40.240822  876220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:58:40.240958  876220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:58:40.241018  876220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:58:40.241226  876220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:58:40.241333  876220 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.509675 seconds
	I1114 15:58:40.241470  876220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:58:40.241658  876220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:58:40.241744  876220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:58:40.241979  876220 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-279880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:58:40.242054  876220 kubeadm.go:322] [bootstrap-token] Using token: 2hujph.0fcw82xd7gxidhsk
	I1114 15:58:40.243677  876220 out.go:204]   - Configuring RBAC rules ...
	I1114 15:58:40.243823  876220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:58:40.243942  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:58:40.244131  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:58:40.244252  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:58:40.244351  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:58:40.244464  876220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:58:40.244616  876220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:58:40.244673  876220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:58:40.244732  876220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:58:40.244762  876220 kubeadm.go:322] 
	I1114 15:58:40.244828  876220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:58:40.244835  876220 kubeadm.go:322] 
	I1114 15:58:40.244904  876220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:58:40.244913  876220 kubeadm.go:322] 
	I1114 15:58:40.244934  876220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:58:40.244982  876220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:58:40.245027  876220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:58:40.245033  876220 kubeadm.go:322] 
	I1114 15:58:40.245108  876220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:58:40.245128  876220 kubeadm.go:322] 
	I1114 15:58:40.245185  876220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:58:40.245195  876220 kubeadm.go:322] 
	I1114 15:58:40.245269  876220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:58:40.245376  876220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:58:40.245483  876220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:58:40.245493  876220 kubeadm.go:322] 
	I1114 15:58:40.245606  876220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:58:40.245700  876220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:58:40.245708  876220 kubeadm.go:322] 
	I1114 15:58:40.245828  876220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.245986  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:58:40.246023  876220 kubeadm.go:322] 	--control-plane 
	I1114 15:58:40.246036  876220 kubeadm.go:322] 
	I1114 15:58:40.246148  876220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:58:40.246158  876220 kubeadm.go:322] 
	I1114 15:58:40.246247  876220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.246364  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:58:40.246386  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:58:40.246394  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:58:40.248160  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:58:40.249669  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:58:40.299570  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:58:40.399662  876220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:58:40.399751  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=embed-certs-279880 minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.399759  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.456044  876220 ops.go:34] apiserver oom_adj: -16
	I1114 15:58:40.674206  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.780887  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:37.695540  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.195681  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:39.338902  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.339264  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.339844  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.582722  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.082401  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.391744  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:41.892060  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.392311  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.892385  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.391523  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.892286  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.392103  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.891494  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:45.392324  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.695415  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.195275  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.842259  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.339758  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.582481  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.079990  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.891330  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.391723  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.892283  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.391436  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.891664  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.392116  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.892052  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.391957  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.892316  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:50.391756  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.696088  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.195252  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.195695  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.891614  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.391818  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.891371  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.391565  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.544346  876220 kubeadm.go:1081] duration metric: took 12.144659895s to wait for elevateKubeSystemPrivileges.
	I1114 15:58:52.544391  876220 kubeadm.go:406] StartCluster complete in 5m9.978264522s
	I1114 15:58:52.544428  876220 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.544541  876220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:58:52.547345  876220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.547635  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:58:52.547785  876220 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:58:52.547873  876220 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-279880"
	I1114 15:58:52.547886  876220 addons.go:69] Setting default-storageclass=true in profile "embed-certs-279880"
	I1114 15:58:52.547903  876220 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-279880"
	I1114 15:58:52.547907  876220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-279880"
	W1114 15:58:52.547915  876220 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:58:52.547951  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:58:52.547986  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548010  876220 addons.go:69] Setting metrics-server=true in profile "embed-certs-279880"
	I1114 15:58:52.548027  876220 addons.go:231] Setting addon metrics-server=true in "embed-certs-279880"
	W1114 15:58:52.548038  876220 addons.go:240] addon metrics-server should already be in state true
	I1114 15:58:52.548083  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548508  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548612  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548844  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.568396  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I1114 15:58:52.568429  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I1114 15:58:52.568402  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I1114 15:58:52.569005  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569019  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569009  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569581  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569612  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.569772  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569796  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.570042  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570183  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.570699  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.570718  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.570742  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.570723  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.571364  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.571943  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.571975  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.575936  876220 addons.go:231] Setting addon default-storageclass=true in "embed-certs-279880"
	W1114 15:58:52.575961  876220 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:58:52.575996  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.576368  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.576412  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.588007  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I1114 15:58:52.588767  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.589487  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.589505  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.589943  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.590164  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.591841  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1114 15:58:52.592269  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.592610  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.594453  876220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:58:52.593100  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.594839  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1114 15:58:52.595836  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:58:52.595856  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:58:52.595874  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.595879  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.596356  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.596654  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.596683  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.597179  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.597199  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.597596  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.598225  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.598250  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.598972  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599389  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.599412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599655  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.599823  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.599971  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.600085  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.601301  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.603202  876220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:58:52.604691  876220 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.604701  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:58:52.604714  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.607585  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.607911  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.607942  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.608138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.608309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.608450  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.608586  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.614716  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1114 15:58:52.615047  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.615462  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.615503  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.615849  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.616006  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.617386  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.617630  876220 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.617647  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:58:52.617666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.620337  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620656  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.620700  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620951  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.621103  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.621252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.621374  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.636800  876220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-279880" context rescaled to 1 replicas
	I1114 15:58:52.636844  876220 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:58:52.638665  876220 out.go:177] * Verifying Kubernetes components...
	I1114 15:58:50.340524  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.341233  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.080611  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.081851  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.582577  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.640094  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:52.829938  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.840140  876220 node_ready.go:35] waiting up to 6m0s for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.840653  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:58:52.858164  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.877415  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:58:52.877448  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:58:52.900588  876220 node_ready.go:49] node "embed-certs-279880" has status "Ready":"True"
	I1114 15:58:52.900614  876220 node_ready.go:38] duration metric: took 60.432125ms waiting for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.900624  876220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:52.972955  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:53.009532  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:58:53.009564  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:58:53.064247  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:53.064283  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:58:53.168472  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:54.543952  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.713966912s)
	I1114 15:58:54.544016  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544029  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544312  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544332  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.544343  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544374  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544650  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544697  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.569577  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.728879408s)
	I1114 15:58:54.569603  876220 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 15:58:54.572090  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.572118  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.572396  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.572420  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063126  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20491351s)
	I1114 15:58:55.063197  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063218  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063551  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063572  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063583  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063596  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063609  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.063888  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063910  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.228754  876220 pod_ready.go:102] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:55.671980  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.503435235s)
	I1114 15:58:55.672050  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672066  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672415  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672481  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672502  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672514  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.672777  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672795  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672807  876220 addons.go:467] Verifying addon metrics-server=true in "embed-certs-279880"
	I1114 15:58:55.674712  876220 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:58:55.676182  876220 addons.go:502] enable addons completed in 3.128402943s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:58:54.695084  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.696106  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.844023  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:57.338618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.660605  876220 pod_ready.go:92] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.660642  876220 pod_ready.go:81] duration metric: took 3.687643856s waiting for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.660659  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671773  876220 pod_ready.go:92] pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.671803  876220 pod_ready.go:81] duration metric: took 11.134131ms waiting for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671817  876220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679179  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.679212  876220 pod_ready.go:81] duration metric: took 7.385218ms waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679224  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691696  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.691721  876220 pod_ready.go:81] duration metric: took 12.488161ms waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691734  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704134  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.704153  876220 pod_ready.go:81] duration metric: took 12.411686ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704161  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950181  876220 pod_ready.go:92] pod "kube-proxy-qdppd" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:57.950213  876220 pod_ready.go:81] duration metric: took 1.246044532s waiting for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950226  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237122  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:58.237150  876220 pod_ready.go:81] duration metric: took 286.915812ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237158  876220 pod_ready.go:38] duration metric: took 5.336525686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:58.237177  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:58:58.237227  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:58:58.260115  876220 api_server.go:72] duration metric: took 5.623228202s to wait for apiserver process to appear ...
	I1114 15:58:58.260147  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:58:58.260169  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:58:58.265361  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:58:58.266889  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:58:58.266918  876220 api_server.go:131] duration metric: took 6.76351ms to wait for apiserver health ...
	I1114 15:58:58.266938  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:58:58.439329  876220 system_pods.go:59] 9 kube-system pods found
	I1114 15:58:58.439362  876220 system_pods.go:61] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.439367  876220 system_pods.go:61] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.439371  876220 system_pods.go:61] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.439375  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.439379  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.439383  876220 system_pods.go:61] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.439387  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.439395  876220 system_pods.go:61] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.439402  876220 system_pods.go:61] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.439412  876220 system_pods.go:74] duration metric: took 172.465662ms to wait for pod list to return data ...
	I1114 15:58:58.439426  876220 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:58:58.637240  876220 default_sa.go:45] found service account: "default"
	I1114 15:58:58.637269  876220 default_sa.go:55] duration metric: took 197.834816ms for default service account to be created ...
	I1114 15:58:58.637278  876220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:58:58.840945  876220 system_pods.go:86] 9 kube-system pods found
	I1114 15:58:58.840976  876220 system_pods.go:89] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.840984  876220 system_pods.go:89] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.840990  876220 system_pods.go:89] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.840996  876220 system_pods.go:89] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.841001  876220 system_pods.go:89] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.841008  876220 system_pods.go:89] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.841014  876220 system_pods.go:89] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.841024  876220 system_pods.go:89] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.841032  876220 system_pods.go:89] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.841046  876220 system_pods.go:126] duration metric: took 203.761925ms to wait for k8s-apps to be running ...
	I1114 15:58:58.841058  876220 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:58:58.841143  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:58.857376  876220 system_svc.go:56] duration metric: took 16.307402ms WaitForService to wait for kubelet.
	I1114 15:58:58.857414  876220 kubeadm.go:581] duration metric: took 6.220529321s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:58:58.857439  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:58:59.036083  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:58:59.036112  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:58:59.036123  876220 node_conditions.go:105] duration metric: took 178.67985ms to run NodePressure ...
	I1114 15:58:59.036136  876220 start.go:228] waiting for startup goroutines ...
	I1114 15:58:59.036142  876220 start.go:233] waiting for cluster config update ...
	I1114 15:58:59.036152  876220 start.go:242] writing updated cluster config ...
	I1114 15:58:59.036464  876220 ssh_runner.go:195] Run: rm -f paused
	I1114 15:58:59.092065  876220 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:58:59.093827  876220 out.go:177] * Done! kubectl is now configured to use "embed-certs-279880" cluster and "default" namespace by default
	I1114 15:58:57.082065  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.082525  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:58.696271  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.195205  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.339863  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.839918  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.582598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:02.796920  876668 pod_ready.go:81] duration metric: took 4m0.000259164s waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:02.796965  876668 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:02.796978  876668 pod_ready.go:38] duration metric: took 4m6.075965552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:02.796999  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:02.797042  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:02.797123  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:02.851170  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:02.851199  876668 cri.go:89] found id: ""
	I1114 15:59:02.851210  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:02.851271  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.857251  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:02.857323  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:02.904914  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:02.904939  876668 cri.go:89] found id: ""
	I1114 15:59:02.904947  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:02.904994  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.909276  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:02.909350  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:02.944708  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:02.944778  876668 cri.go:89] found id: ""
	I1114 15:59:02.944789  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:02.944856  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.949260  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:02.949334  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:02.986830  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:02.986858  876668 cri.go:89] found id: ""
	I1114 15:59:02.986868  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:02.986928  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.991432  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:02.991511  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:03.028072  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.028101  876668 cri.go:89] found id: ""
	I1114 15:59:03.028113  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:03.028177  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.032678  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:03.032771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:03.070651  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.070671  876668 cri.go:89] found id: ""
	I1114 15:59:03.070679  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:03.070727  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.075127  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:03.075192  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:03.117191  876668 cri.go:89] found id: ""
	I1114 15:59:03.117221  876668 logs.go:284] 0 containers: []
	W1114 15:59:03.117229  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:03.117235  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:03.117300  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:03.163227  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.163255  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:03.163260  876668 cri.go:89] found id: ""
	I1114 15:59:03.163269  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:03.163322  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.167410  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.171362  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:03.171389  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:03.330078  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:03.330113  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:03.372318  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:03.372349  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.414474  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:03.414506  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.471989  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:03.472025  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.516802  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:03.516834  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:03.532186  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:03.532218  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:03.987984  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:03.988029  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:04.045261  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:04.045305  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:04.095816  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:04.095853  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:04.148084  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:04.148132  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:04.200992  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:04.201039  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:04.239171  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:04.239207  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:03.695077  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.194941  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:04.339648  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.839045  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:08.841546  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.787847  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:06.808020  876668 api_server.go:72] duration metric: took 4m16.941929205s to wait for apiserver process to appear ...
	I1114 15:59:06.808052  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:06.808087  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:06.808146  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:06.849716  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:06.849747  876668 cri.go:89] found id: ""
	I1114 15:59:06.849758  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:06.849816  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.854025  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:06.854093  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:06.894331  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:06.894361  876668 cri.go:89] found id: ""
	I1114 15:59:06.894371  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:06.894430  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.899047  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:06.899137  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:06.947156  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:06.947194  876668 cri.go:89] found id: ""
	I1114 15:59:06.947206  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:06.947279  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.952972  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:06.953045  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:06.997872  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:06.997899  876668 cri.go:89] found id: ""
	I1114 15:59:06.997910  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:06.997972  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.002282  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:07.002362  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:07.041689  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.041722  876668 cri.go:89] found id: ""
	I1114 15:59:07.041734  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:07.041800  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.045730  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:07.045797  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:07.091996  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.092021  876668 cri.go:89] found id: ""
	I1114 15:59:07.092032  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:07.092094  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.100690  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:07.100771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:07.141635  876668 cri.go:89] found id: ""
	I1114 15:59:07.141670  876668 logs.go:284] 0 containers: []
	W1114 15:59:07.141681  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:07.141689  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:07.141750  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:07.184807  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:07.184839  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.184847  876668 cri.go:89] found id: ""
	I1114 15:59:07.184857  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:07.184920  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.189361  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.197666  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:07.197694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:07.243532  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:07.243568  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:07.284479  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:07.284520  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.326309  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:07.326341  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:07.794035  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:07.794077  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.836008  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:07.836050  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:07.886157  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:07.886192  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:07.930752  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:07.930795  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.983727  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:07.983765  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:08.024969  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:08.025000  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:08.079050  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:08.079090  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:08.093653  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:08.093691  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:08.228823  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:08.228864  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:08.196022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.196145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:12.196843  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:11.340269  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:13.840055  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.780836  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:59:10.793555  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:59:10.794839  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:59:10.794868  876668 api_server.go:131] duration metric: took 3.986808086s to wait for apiserver health ...
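	(For context on the step above: the test binary is polling the cluster apiserver's /healthz endpoint until it returns HTTP 200 with body "ok". The Go sketch below illustrates such a probe under stated assumptions; the URL, the 2-minute timeout, and the skipped TLS verification are illustrative choices, not minikube's actual client code.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls an apiserver /healthz endpoint until it answers 200/"ok"
	// or the deadline expires. Illustrative sketch only.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test clusters use self-signed certificates, so verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.61.196:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}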
	I1114 15:59:10.794878  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:10.794907  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:10.794989  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:10.842028  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:10.842050  876668 cri.go:89] found id: ""
	I1114 15:59:10.842059  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:10.842113  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.846938  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:10.847030  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:10.893360  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:10.893386  876668 cri.go:89] found id: ""
	I1114 15:59:10.893394  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:10.893443  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.899601  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:10.899669  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:10.949519  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:10.949542  876668 cri.go:89] found id: ""
	I1114 15:59:10.949550  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:10.949602  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.953875  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:10.953936  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:10.994565  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:10.994595  876668 cri.go:89] found id: ""
	I1114 15:59:10.994605  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:10.994659  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.999120  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:10.999187  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:11.039364  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.039392  876668 cri.go:89] found id: ""
	I1114 15:59:11.039403  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:11.039509  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.044115  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:11.044174  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:11.088803  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.088835  876668 cri.go:89] found id: ""
	I1114 15:59:11.088846  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:11.088917  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.094005  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:11.094076  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:11.145247  876668 cri.go:89] found id: ""
	I1114 15:59:11.145276  876668 logs.go:284] 0 containers: []
	W1114 15:59:11.145285  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:11.145294  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:11.145355  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:11.188916  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.188950  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.188957  876668 cri.go:89] found id: ""
	I1114 15:59:11.188967  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:11.189029  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.195578  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.200146  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:11.200174  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:11.240413  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:11.240458  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.290614  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:11.290648  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:11.638700  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:11.638743  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:11.654234  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:11.654267  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.709147  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:11.709184  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:11.751661  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:11.751701  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.796993  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:11.797041  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.841478  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:11.841510  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:11.972862  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:11.972902  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:12.019217  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:12.019260  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:12.073396  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:12.073443  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:12.142653  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:12.142694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:14.704129  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:59:14.704159  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.704167  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.704173  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.704179  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.704184  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.704191  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.704200  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.704207  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.704217  876668 system_pods.go:74] duration metric: took 3.909331461s to wait for pod list to return data ...
	I1114 15:59:14.704231  876668 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:14.706920  876668 default_sa.go:45] found service account: "default"
	I1114 15:59:14.706944  876668 default_sa.go:55] duration metric: took 2.702527ms for default service account to be created ...
	I1114 15:59:14.706954  876668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:14.714049  876668 system_pods.go:86] 8 kube-system pods found
	I1114 15:59:14.714080  876668 system_pods.go:89] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.714089  876668 system_pods.go:89] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.714096  876668 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.714101  876668 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.714106  876668 system_pods.go:89] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.714113  876668 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.714128  876668 system_pods.go:89] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.714142  876668 system_pods.go:89] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.714152  876668 system_pods.go:126] duration metric: took 7.191238ms to wait for k8s-apps to be running ...
	I1114 15:59:14.714174  876668 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:59:14.714231  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:14.734987  876668 system_svc.go:56] duration metric: took 20.804278ms WaitForService to wait for kubelet.
	I1114 15:59:14.735015  876668 kubeadm.go:581] duration metric: took 4m24.868931304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:59:14.735038  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:59:14.737844  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:59:14.737868  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:59:14.737878  876668 node_conditions.go:105] duration metric: took 2.834918ms to run NodePressure ...
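	(The NodePressure verification above reads each node's reported capacity, e.g. 17784752Ki of ephemeral storage and 2 CPUs. A minimal client-go sketch that lists the same capacity figures follows; the kubeconfig path is an assumed placeholder for illustration, and this is not the tool's actual implementation.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust to the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Print the capacity fields that the log above summarizes per node.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}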
	I1114 15:59:14.737889  876668 start.go:228] waiting for startup goroutines ...
	I1114 15:59:14.737895  876668 start.go:233] waiting for cluster config update ...
	I1114 15:59:14.737905  876668 start.go:242] writing updated cluster config ...
	I1114 15:59:14.738157  876668 ssh_runner.go:195] Run: rm -f paused
	I1114 15:59:14.791076  876668 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:59:14.793853  876668 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-529430" cluster and "default" namespace by default
	I1114 15:59:14.694842  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:15.887599  876396 pod_ready.go:81] duration metric: took 4m0.000892827s waiting for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:15.887641  876396 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:15.887664  876396 pod_ready.go:38] duration metric: took 4m1.199797165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:15.887694  876396 kubeadm.go:640] restartCluster took 5m7.501574769s
	W1114 15:59:15.887782  876396 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:15.887859  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:16.340114  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:18.340157  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:20.901839  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.013944828s)
	I1114 15:59:20.901933  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:20.915929  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:20.928081  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:20.937656  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:20.937756  876396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1114 15:59:20.998439  876396 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1114 15:59:20.998593  876396 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:21.145429  876396 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:21.145639  876396 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:21.145777  876396 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:21.387825  876396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:21.388897  876396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:21.396490  876396 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1114 15:59:21.518176  876396 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:21.520261  876396 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:21.520398  876396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:21.520496  876396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:21.520590  876396 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:21.520686  876396 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:21.520797  876396 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:21.520918  876396 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:21.521009  876396 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:21.521434  876396 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:21.521822  876396 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:21.522333  876396 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:21.522651  876396 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:21.522730  876396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:21.707438  876396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:21.890929  876396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:22.058077  876396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:22.234616  876396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:22.235636  876396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:22.237626  876396 out.go:204]   - Booting up control plane ...
	I1114 15:59:22.237743  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:22.241964  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:22.242976  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:22.244745  876396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:22.248349  876396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:20.341685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:22.838566  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:25.337887  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:27.341368  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.256998  876396 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005833 seconds
	I1114 15:59:32.257145  876396 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:32.272061  876396 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:32.797161  876396 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:32.797367  876396 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-842105 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 15:59:33.314721  876396 kubeadm.go:322] [bootstrap-token] Using token: 04dlot.9kpu87sb3ajm8dfs
	I1114 15:59:33.316454  876396 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:33.316628  876396 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:33.324455  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:33.328877  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:33.335460  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:33.339307  876396 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:33.422742  876396 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:33.757796  876396 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:33.759150  876396 kubeadm.go:322] 
	I1114 15:59:33.759248  876396 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:33.759281  876396 kubeadm.go:322] 
	I1114 15:59:33.759442  876396 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:33.759459  876396 kubeadm.go:322] 
	I1114 15:59:33.759495  876396 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:33.759577  876396 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:33.759647  876396 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:33.759657  876396 kubeadm.go:322] 
	I1114 15:59:33.759726  876396 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:33.759828  876396 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:33.759922  876396 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:33.759931  876396 kubeadm.go:322] 
	I1114 15:59:33.760050  876396 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1114 15:59:33.760143  876396 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:33.760154  876396 kubeadm.go:322] 
	I1114 15:59:33.760239  876396 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760360  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:33.760397  876396 kubeadm.go:322]     --control-plane 	  
	I1114 15:59:33.760408  876396 kubeadm.go:322] 
	I1114 15:59:33.760517  876396 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:33.760527  876396 kubeadm.go:322] 
	I1114 15:59:33.760624  876396 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760781  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:33.764918  876396 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:33.764993  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:59:33.765010  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:33.767708  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:29.839580  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.339612  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:33.072424  876065 pod_ready.go:81] duration metric: took 4m0.000921839s waiting for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:33.072553  876065 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:33.072606  876065 pod_ready.go:38] duration metric: took 4m10.602378093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:33.072664  876065 kubeadm.go:640] restartCluster took 4m30.632686786s
	W1114 15:59:33.072782  876065 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:33.073057  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:33.769398  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:33.781327  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:33.810672  876396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:33.810839  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:33.810927  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=old-k8s-version-842105 minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.181391  876396 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:34.181528  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.301381  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.919870  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.419262  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.919637  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.419780  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.919453  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.420046  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.919605  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.419845  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.919474  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.419303  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.919616  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.419633  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.919220  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.419298  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.919396  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.420042  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.919886  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.419274  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.920217  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.419952  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.919511  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.419619  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.919762  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.420141  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.919676  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.261922  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.188828866s)
	I1114 15:59:47.262031  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:47.276268  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:47.285701  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:47.294481  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:47.294540  876065 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:59:47.348856  876065 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:59:47.348959  876065 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:47.530233  876065 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:47.530413  876065 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:47.530581  876065 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:47.784516  876065 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:47.420108  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.920005  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.419707  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.527158  876396 kubeadm.go:1081] duration metric: took 14.716377346s to wait for elevateKubeSystemPrivileges.
	I1114 15:59:48.527193  876396 kubeadm.go:406] StartCluster complete in 5m40.211957984s
	I1114 15:59:48.527213  876396 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.527323  876396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:59:48.529723  876396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.530058  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:59:48.530134  876396 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:59:48.530222  876396 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530248  876396 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-842105"
	W1114 15:59:48.530257  876396 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:59:48.530256  876396 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530285  876396 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-842105"
	W1114 15:59:48.530297  876396 addons.go:240] addon metrics-server should already be in state true
	I1114 15:59:48.530321  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530342  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530354  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:59:48.530429  876396 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530457  876396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-842105"
	I1114 15:59:48.530764  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530793  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530805  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530795  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530818  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530822  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.549568  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1114 15:59:48.549642  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I1114 15:59:48.550081  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550240  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550734  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550755  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.550866  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550887  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.551164  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551425  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551622  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.551766  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.551813  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.552539  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1114 15:59:48.553028  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.554044  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.554063  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.554522  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.555069  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555106  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.555404  876396 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-842105"
	W1114 15:59:48.555470  876396 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:59:48.555516  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.555924  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555961  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.576876  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1114 15:59:48.576912  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I1114 15:59:48.576878  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1114 15:59:48.577223  876396 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-842105" context rescaled to 1 replicas
	I1114 15:59:48.577266  876396 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:59:48.579711  876396 out.go:177] * Verifying Kubernetes components...
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577672  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.581751  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:48.580402  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581791  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580422  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581852  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580432  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581919  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.582238  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582286  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582314  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582439  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.582735  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.582751  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.583264  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.584865  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.586792  876396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:59:48.585415  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.588364  876396 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.588378  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:59:48.588398  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.592854  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.594307  876396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:59:47.786524  876065 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:47.786668  876065 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:47.786744  876065 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:47.786843  876065 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:47.786912  876065 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:47.787108  876065 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:47.787698  876065 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:47.788301  876065 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:47.788930  876065 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:47.789533  876065 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:47.790115  876065 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:47.790449  876065 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:47.790523  876065 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:47.975724  876065 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:48.056071  876065 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:48.340177  876065 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:48.733230  876065 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:48.734350  876065 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:48.738369  876065 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:48.740013  876065 out.go:204]   - Booting up control plane ...
	I1114 15:59:48.740143  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:48.740271  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:48.743856  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:48.763450  876065 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:48.764688  876065 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:48.764768  876065 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:59:48.932286  876065 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:48.592918  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.593079  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.595739  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:59:48.595754  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:59:48.595776  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.595826  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.595852  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.596957  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.597212  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.599011  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599448  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.599710  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599755  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.599975  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.600142  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.600304  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.607351  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I1114 15:59:48.607929  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.608484  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.608509  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.608998  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.609237  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.610958  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.611196  876396 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.611210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:59:48.611228  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.613709  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614297  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.614322  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614366  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.614539  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.614631  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.614711  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.708399  876396 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.708481  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:59:48.715087  876396 node_ready.go:49] node "old-k8s-version-842105" has status "Ready":"True"
	I1114 15:59:48.715111  876396 node_ready.go:38] duration metric: took 6.675707ms waiting for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.715124  876396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718748  876396 pod_ready.go:38] duration metric: took 3.605786ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718790  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:48.718857  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:48.750191  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.773186  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:59:48.773210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:59:48.788782  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.847057  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:59:48.847090  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:59:48.905401  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:48.905442  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:59:48.986582  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:49.606449  876396 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1114 15:59:49.606451  876396 api_server.go:72] duration metric: took 1.029145444s to wait for apiserver process to appear ...
	I1114 15:59:49.606506  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:49.606530  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:59:49.709702  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.709732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.710100  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.710130  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.710144  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.710153  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.711953  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.711985  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.711994  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.755976  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:59:49.756696  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.756719  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.757036  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.757103  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.757121  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.757390  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:59:49.757410  876396 api_server.go:131] duration metric: took 150.89717ms to wait for apiserver health ...
	I1114 15:59:49.757447  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:49.763460  876396 system_pods.go:59] 2 kube-system pods found
	I1114 15:59:49.763487  876396 system_pods.go:61] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.763497  876396 system_pods.go:61] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.763509  876396 system_pods.go:74] duration metric: took 6.051168ms to wait for pod list to return data ...
	I1114 15:59:49.763518  876396 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:49.776313  876396 default_sa.go:45] found service account: "default"
	I1114 15:59:49.776341  876396 default_sa.go:55] duration metric: took 12.814566ms for default service account to be created ...
	I1114 15:59:49.776351  876396 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:49.782462  876396 system_pods.go:86] 2 kube-system pods found
	I1114 15:59:49.782502  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.782518  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.782544  876396 retry.go:31] will retry after 311.640315ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.157150  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368304542s)
	I1114 15:59:50.157269  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157286  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.157688  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.157711  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.157730  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157743  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.158219  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.158270  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.169219  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.169264  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.169275  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.169282  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending
	I1114 15:59:50.169304  876396 retry.go:31] will retry after 335.621385ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.357400  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.370764048s)
	I1114 15:59:50.357474  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.357494  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.359782  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.359789  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.359811  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.359829  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.359840  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.360228  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.360264  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.360285  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.360333  876396 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-842105"
	I1114 15:59:50.362545  876396 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:59:50.364302  876396 addons.go:502] enable addons completed in 1.834168315s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:59:50.616547  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.616597  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.616608  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.616623  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.616645  876396 retry.go:31] will retry after 349.737645ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.971245  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.971286  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.971298  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.971312  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.971333  876396 retry.go:31] will retry after 562.981893ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:51.541777  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:51.541822  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:51.541849  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:51.541862  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:51.541870  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:51.541892  876396 retry.go:31] will retry after 617.692214ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:52.166157  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.166192  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.166199  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.166207  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.166211  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.166227  876396 retry.go:31] will retry after 671.968353ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:52.844235  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.844269  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.844276  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.844285  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.844290  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.844309  876396 retry.go:31] will retry after 955.353451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:53.814593  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:53.814626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:53.814636  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:53.814651  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:53.814661  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:53.814680  876396 retry.go:31] will retry after 1.306938168s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:55.127401  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:55.127436  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:55.127445  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:55.127457  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:55.127465  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:55.127488  876396 retry.go:31] will retry after 1.627615182s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.759304  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:56.759339  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:56.759345  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:56.759353  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:56.759358  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:56.759373  876396 retry.go:31] will retry after 2.046606031s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.936792  876065 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004387 seconds
	I1114 15:59:56.936992  876065 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:56.965969  876065 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:57.504894  876065 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:57.505171  876065 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-490998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:59:58.021451  876065 kubeadm.go:322] [bootstrap-token] Using token: 3x3ma3.qtutj9fi1nmgzc3r
	I1114 15:59:58.023064  876065 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:58.023220  876065 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:58.028334  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:59:58.039638  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:58.043783  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:58.048814  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:58.061419  876065 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:58.075996  876065 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:59:58.328245  876065 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:58.435170  876065 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:58.436684  876065 kubeadm.go:322] 
	I1114 15:59:58.436781  876065 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:58.436796  876065 kubeadm.go:322] 
	I1114 15:59:58.436889  876065 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:58.436932  876065 kubeadm.go:322] 
	I1114 15:59:58.436988  876065 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:58.437091  876065 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:58.437155  876065 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:58.437176  876065 kubeadm.go:322] 
	I1114 15:59:58.437231  876065 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:59:58.437239  876065 kubeadm.go:322] 
	I1114 15:59:58.437281  876065 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:59:58.437288  876065 kubeadm.go:322] 
	I1114 15:59:58.437353  876065 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:58.437449  876065 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:58.437564  876065 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:58.437574  876065 kubeadm.go:322] 
	I1114 15:59:58.437684  876065 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:59:58.437800  876065 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:58.437816  876065 kubeadm.go:322] 
	I1114 15:59:58.437937  876065 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438087  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:58.438116  876065 kubeadm.go:322] 	--control-plane 
	I1114 15:59:58.438124  876065 kubeadm.go:322] 
	I1114 15:59:58.438194  876065 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:58.438202  876065 kubeadm.go:322] 
	I1114 15:59:58.438267  876065 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438355  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:58.442217  876065 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:58.442251  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:59:58.442263  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:58.444078  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:58.445560  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:58.467849  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:58.501795  876065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:58.501941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.501965  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=no-preload-490998 minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.557314  876065 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:58.891105  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:59.006867  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.811870  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:58.811905  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:58.811912  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:58.811920  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:58.811924  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:58.811939  876396 retry.go:31] will retry after 2.166453413s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:00.984597  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:00.984626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:00.984632  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:00.984638  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:00.984643  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:00.984661  876396 retry.go:31] will retry after 2.339496963s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:59.620843  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.120941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.621244  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.121507  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.621512  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.121367  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.621449  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.120920  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.620857  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.329034  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:03.329061  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:03.329067  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:03.329074  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:03.329078  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:03.329097  876396 retry.go:31] will retry after 3.593700907s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:06.929268  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:06.929308  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:06.929316  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:06.929327  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:06.929335  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:06.929357  876396 retry.go:31] will retry after 4.929780079s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:04.121245  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:04.620976  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.120894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.621609  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.121209  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.621322  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.121613  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.620968  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.121482  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.621166  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.121032  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.620894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.120992  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.621306  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.121427  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.299388  876065 kubeadm.go:1081] duration metric: took 12.79751335s to wait for elevateKubeSystemPrivileges.
	I1114 16:00:11.299429  876065 kubeadm.go:406] StartCluster complete in 5m8.910317864s
	I1114 16:00:11.299489  876065 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.299594  876065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:00:11.301841  876065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.302097  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 16:00:11.302144  876065 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 16:00:11.302251  876065 addons.go:69] Setting storage-provisioner=true in profile "no-preload-490998"
	I1114 16:00:11.302268  876065 addons.go:69] Setting default-storageclass=true in profile "no-preload-490998"
	I1114 16:00:11.302287  876065 addons.go:231] Setting addon storage-provisioner=true in "no-preload-490998"
	W1114 16:00:11.302301  876065 addons.go:240] addon storage-provisioner should already be in state true
	I1114 16:00:11.302296  876065 addons.go:69] Setting metrics-server=true in profile "no-preload-490998"
	I1114 16:00:11.302327  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:00:11.302346  876065 addons.go:231] Setting addon metrics-server=true in "no-preload-490998"
	W1114 16:00:11.302360  876065 addons.go:240] addon metrics-server should already be in state true
	I1114 16:00:11.302361  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302408  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302287  876065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-490998"
	I1114 16:00:11.302858  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302926  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302942  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302956  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302863  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.303043  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.323089  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1114 16:00:11.323101  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1114 16:00:11.323750  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.323807  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.324339  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324362  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324554  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324577  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324806  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I1114 16:00:11.325059  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325120  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325172  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.325617  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.325652  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326120  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.326138  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.326359  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.326398  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326499  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.326665  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.330090  876065 addons.go:231] Setting addon default-storageclass=true in "no-preload-490998"
	W1114 16:00:11.330115  876065 addons.go:240] addon default-storageclass should already be in state true
	I1114 16:00:11.330144  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.330381  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.330415  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.347198  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1114 16:00:11.347385  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I1114 16:00:11.347562  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1114 16:00:11.347721  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347785  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347897  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.348216  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348232  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348346  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348358  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348366  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348370  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348593  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348729  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348878  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348947  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349143  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349223  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.349270  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.351308  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.353786  876065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 16:00:11.352409  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.355097  876065 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.355119  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 16:00:11.355141  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.356613  876065 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 16:00:11.357928  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 16:00:11.357949  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 16:00:11.357969  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.358548  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359421  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.359450  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359652  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.359922  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.360221  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.360379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.362075  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362508  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.362532  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362831  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.363041  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.363234  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.363390  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.379820  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I1114 16:00:11.380297  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.380905  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.380935  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.381326  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.381573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.383433  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.383722  876065 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.383741  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 16:00:11.383762  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.386432  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.386813  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.386845  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.387062  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.387311  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.387490  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.387661  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.450418  876065 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-490998" context rescaled to 1 replicas
	I1114 16:00:11.450472  876065 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:00:11.452499  876065 out.go:177] * Verifying Kubernetes components...
	I1114 16:00:11.864833  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:11.864867  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:11.864875  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:11.864884  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:11.864891  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:11.864918  876396 retry.go:31] will retry after 6.141765036s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:11.454141  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:11.560863  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.582400  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 16:00:11.582423  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 16:00:11.596910  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.626625  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 16:00:11.626652  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 16:00:11.634166  876065 node_ready.go:35] waiting up to 6m0s for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.634309  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 16:00:11.706391  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.706421  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 16:00:11.737914  876065 node_ready.go:49] node "no-preload-490998" has status "Ready":"True"
	I1114 16:00:11.737955  876065 node_ready.go:38] duration metric: took 103.74965ms waiting for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.737969  876065 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:11.795522  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.910850  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:13.838426  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.277507449s)
	I1114 16:00:13.838488  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838481  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.241527225s)
	I1114 16:00:13.838530  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838555  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838501  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838599  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.204200469s)
	I1114 16:00:13.838636  876065 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1114 16:00:13.838941  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.838992  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839001  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839008  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839016  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.839032  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839047  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839057  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839066  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841315  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841335  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.841398  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841418  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855083  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.059516605s)
	I1114 16:00:13.855146  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855169  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855524  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855572  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855588  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855600  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855612  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855921  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855949  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855961  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855979  876065 addons.go:467] Verifying addon metrics-server=true in "no-preload-490998"
	I1114 16:00:13.864145  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.864168  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.864444  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.864480  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.864491  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.867459  876065 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1114 16:00:13.868861  876065 addons.go:502] enable addons completed in 2.566733189s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1114 16:00:14.067240  876065 pod_ready.go:97] error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067289  876065 pod_ready.go:81] duration metric: took 2.15639988s waiting for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	E1114 16:00:14.067306  876065 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067315  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140385  876065 pod_ready.go:92] pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.140412  876065 pod_ready.go:81] duration metric: took 2.07308909s waiting for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140422  876065 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145818  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.145837  876065 pod_ready.go:81] duration metric: took 5.409163ms waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145845  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150850  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.150868  876065 pod_ready.go:81] duration metric: took 5.017013ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150877  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155895  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.155919  876065 pod_ready.go:81] duration metric: took 5.034132ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155931  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254239  876065 pod_ready.go:92] pod "kube-proxy-9nc8j" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.254270  876065 pod_ready.go:81] duration metric: took 98.331009ms waiting for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254282  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653014  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.653041  876065 pod_ready.go:81] duration metric: took 398.751468ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653049  876065 pod_ready.go:38] duration metric: took 4.915065516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:16.653066  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 16:00:16.653118  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 16:00:16.670396  876065 api_server.go:72] duration metric: took 5.219889322s to wait for apiserver process to appear ...
	I1114 16:00:16.670430  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 16:00:16.670450  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 16:00:16.675936  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 16:00:16.677570  876065 api_server.go:141] control plane version: v1.28.3
	I1114 16:00:16.677592  876065 api_server.go:131] duration metric: took 7.155742ms to wait for apiserver health ...
	I1114 16:00:16.677601  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 16:00:16.858468  876065 system_pods.go:59] 8 kube-system pods found
	I1114 16:00:16.858500  876065 system_pods.go:61] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:16.858505  876065 system_pods.go:61] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:16.858509  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:16.858514  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:16.858518  876065 system_pods.go:61] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:16.858522  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:16.858529  876065 system_pods.go:61] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:16.858534  876065 system_pods.go:61] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:16.858543  876065 system_pods.go:74] duration metric: took 180.935707ms to wait for pod list to return data ...
	I1114 16:00:16.858551  876065 default_sa.go:34] waiting for default service account to be created ...
	I1114 16:00:17.053423  876065 default_sa.go:45] found service account: "default"
	I1114 16:00:17.053478  876065 default_sa.go:55] duration metric: took 194.91891ms for default service account to be created ...
	I1114 16:00:17.053491  876065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 16:00:17.256504  876065 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:17.256539  876065 system_pods.go:89] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:17.256547  876065 system_pods.go:89] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:17.256554  876065 system_pods.go:89] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:17.256561  876065 system_pods.go:89] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:17.256567  876065 system_pods.go:89] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:17.256572  876065 system_pods.go:89] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:17.256582  876065 system_pods.go:89] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:17.256589  876065 system_pods.go:89] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:17.256602  876065 system_pods.go:126] duration metric: took 203.104027ms to wait for k8s-apps to be running ...
	I1114 16:00:17.256615  876065 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:17.256682  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:17.273098  876065 system_svc.go:56] duration metric: took 16.455935ms WaitForService to wait for kubelet.
	I1114 16:00:17.273135  876065 kubeadm.go:581] duration metric: took 5.822636312s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:17.273162  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:17.453601  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:17.453635  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:17.453675  876065 node_conditions.go:105] duration metric: took 180.505934ms to run NodePressure ...
	I1114 16:00:17.453692  876065 start.go:228] waiting for startup goroutines ...
	I1114 16:00:17.453706  876065 start.go:233] waiting for cluster config update ...
	I1114 16:00:17.453748  876065 start.go:242] writing updated cluster config ...
	I1114 16:00:17.454022  876065 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:17.505999  876065 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 16:00:17.509514  876065 out.go:177] * Done! kubectl is now configured to use "no-preload-490998" cluster and "default" namespace by default
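
[Editor's note] The startup log above ends with the apiserver readiness wait: minikube polls https://192.168.50.251:8443/healthz and accepts the 200 "ok" response before reporting the control plane healthy. The following is only a minimal standalone sketch of that kind of probe, not minikube's actual api_server.go implementation; the endpoint URL, timeout, and the InsecureSkipVerify shortcut are illustrative assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver here serves a self-signed certificate; skipping
			// verification keeps this sketch self-contained. Real tooling
			// would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.251:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns HTTP 200 with the body "ok",
	// matching the "returned 200: ok" lines in the log above.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
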
	I1114 16:00:18.012940  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:18.012980  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:18.012988  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:18.012998  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:18.013007  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:18.013032  876396 retry.go:31] will retry after 7.087138718s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:25.105773  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:25.105804  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:25.105809  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:25.105817  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:25.105822  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:25.105842  876396 retry.go:31] will retry after 8.539395127s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:33.651084  876396 system_pods.go:86] 6 kube-system pods found
	I1114 16:00:33.651116  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:33.651121  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:33.651125  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:33.651129  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:33.651136  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:33.651141  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:33.651159  876396 retry.go:31] will retry after 10.428154724s: missing components: etcd, kube-apiserver
	I1114 16:00:44.086463  876396 system_pods.go:86] 7 kube-system pods found
	I1114 16:00:44.086496  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:44.086501  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:44.086506  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:44.086511  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:44.086515  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:44.086522  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:44.086527  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:44.086546  876396 retry.go:31] will retry after 10.535877375s: missing components: kube-apiserver
	I1114 16:00:54.631194  876396 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:54.631230  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:54.631237  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:54.631244  876396 system_pods.go:89] "kube-apiserver-old-k8s-version-842105" [3035c074-63ca-4b23-a375-415210397d17] Running
	I1114 16:00:54.631252  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:54.631259  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:54.631265  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:54.631275  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:54.631291  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:54.631304  876396 system_pods.go:126] duration metric: took 1m4.854946282s to wait for k8s-apps to be running ...
	I1114 16:00:54.631317  876396 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:54.631470  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:54.648616  876396 system_svc.go:56] duration metric: took 17.286024ms WaitForService to wait for kubelet.
	I1114 16:00:54.648650  876396 kubeadm.go:581] duration metric: took 1m6.071350783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:54.648677  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:54.652020  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:54.652055  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:54.652069  876396 node_conditions.go:105] duration metric: took 3.385579ms to run NodePressure ...
	I1114 16:00:54.652085  876396 start.go:228] waiting for startup goroutines ...
	I1114 16:00:54.652093  876396 start.go:233] waiting for cluster config update ...
	I1114 16:00:54.652106  876396 start.go:242] writing updated cluster config ...
	I1114 16:00:54.652418  876396 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:54.706394  876396 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1114 16:00:54.708374  876396 out.go:177] 
	W1114 16:00:54.709776  876396 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1114 16:00:54.711177  876396 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1114 16:00:54.712775  876396 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-842105" cluster and "default" namespace by default
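
[Editor's note] The old-k8s-version run finishes with a client/server version-skew warning: kubectl 1.28.3 against a 1.16.0 cluster is reported as "minor skew: 12". A small sketch of how such a skew value can be derived from two "major.minor.patch" strings follows; the minorSkew helper is an illustrative assumption, not the code minikube uses, and error handling is trimmed for brevity.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor version
// components of two "major.minor.patch" strings (an optional leading
// "v" is tolerated).
func minorSkew(client, server string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Prints 12, matching the "minor skew: 12" warning in the log above.
	fmt.Println(minorSkew("1.28.3", "1.16.0"))
}
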
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:53:28 UTC, ends at Tue 2023-11-14 16:08:00 UTC. --
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.852090023Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=5dc8fa5c-e0bb-4870-9ad7-b86ce0fb9cab name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.852372627Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699977512005299411,StartedAt:1699977513131677152,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a1e4f62415f16dde270e802807238601/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a1e4f62415f16dde270e802807238601/containers/kube-scheduler/a45fb655,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-279880_a1e4f62415f16dde270e802807238601/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=5dc8fa5c-e0bb-4870-9ad7-b86ce0fb9cab name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.852741822Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=bceac8ec-87da-4f10-936c-3b8da617192e name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.852810978Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699977511832995704,StartedAt:1699977513121147942,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f26039813275e3110d741b46c8b90541/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f26039813275e3110d741b46c8b90541/containers/kube-apiserver/5bb20f3f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-279880_f26039813
275e3110d741b46c8b90541/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=bceac8ec-87da-4f10-936c-3b8da617192e name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.870656688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7e4fdbe0-2740-4258-bb11-bff3371c6d7f name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.870740774Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7e4fdbe0-2740-4258-bb11-bff3371c6d7f name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.872154492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e11df7e4-ec52-4de4-bfcb-37db86b8fe62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.872682180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978080872667992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e11df7e4-ec52-4de4-bfcb-37db86b8fe62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.873497253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=424b80fb-6e66-4e50-a9c9-cee0b50396cf name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.873567768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=424b80fb-6e66-4e50-a9c9-cee0b50396cf name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.873834028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=424b80fb-6e66-4e50-a9c9-cee0b50396cf name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.916024226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=10557591-0230-4a7a-a1ed-71ba993db3a6 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.916089670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=10557591-0230-4a7a-a1ed-71ba993db3a6 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.917515079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d0fd8370-9b74-4441-adac-2bd0f4cb4647 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.917903250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978080917887671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d0fd8370-9b74-4441-adac-2bd0f4cb4647 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.918374981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fe5eb0e-1411-4866-912e-bdceb075e3a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.918516652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fe5eb0e-1411-4866-912e-bdceb075e3a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.918752969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fe5eb0e-1411-4866-912e-bdceb075e3a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.954111902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f9c12d20-36b8-41a8-95fb-f6f0bbc158b6 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.954194400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f9c12d20-36b8-41a8-95fb-f6f0bbc158b6 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.955765954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0bcecc71-6c7d-4c58-8719-22c362f5e355 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.956202768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978080956183655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0bcecc71-6c7d-4c58-8719-22c362f5e355 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.956702127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42debe1f-ee6a-4f01-b800-53678eedcbfd name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.956776645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42debe1f-ee6a-4f01-b800-53678eedcbfd name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:00 embed-certs-279880 crio[707]: time="2023-11-14 16:08:00.956931646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42debe1f-ee6a-4f01-b800-53678eedcbfd name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fe9f08afebe6e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a16a96152bc35       storage-provisioner
	9cae5d2c2a9eb       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   0cb501837f5b7       kube-proxy-qdppd
	9257697dbd32b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   41e9a1ff99376       coredns-5dd5756b68-42nzn
	b28ed4dcfc30b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   59f0ab2a002c1       etcd-embed-certs-279880
	605fd09539313       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   64de30fc95549       kube-controller-manager-embed-certs-279880
	7a97f16105c7a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   1c5eea2f27aa4       kube-scheduler-embed-certs-279880
	12a4ab719e119       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   4073e91be8f5a       kube-apiserver-embed-certs-279880
	
	* 
	* ==> coredns [9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33459 - 63391 "HINFO IN 2980470950394339585.3559220984865409200. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011629594s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-279880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-279880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=embed-certs-279880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:58:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-279880
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 16:07:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:04:06 +0000   Tue, 14 Nov 2023 15:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:04:06 +0000   Tue, 14 Nov 2023 15:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:04:06 +0000   Tue, 14 Nov 2023 15:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:04:06 +0000   Tue, 14 Nov 2023 15:58:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    embed-certs-279880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2367ca900cfb4b1c89db78f52091f224
	  System UUID:                2367ca90-0cfb-4b1c-89db-78f52091f224
	  Boot ID:                    6a108333-9860-4bde-910b-df6c310bed4c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-42nzn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-279880                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-279880             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-279880    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-qdppd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-279880             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-g5wh5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node embed-certs-279880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node embed-certs-279880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node embed-certs-279880 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-279880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-279880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-279880 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node embed-certs-279880 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s                  kubelet          Node embed-certs-279880 status is now: NodeReady
	  Normal  RegisteredNode           9m10s                  node-controller  Node embed-certs-279880 event: Registered Node embed-certs-279880 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068483] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.312848] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.213967] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.137969] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.444257] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.799256] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.113679] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.150432] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.118438] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.227506] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +17.218420] systemd-fstab-generator[906]: Ignoring "noauto" for root device
	[Nov14 15:54] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.048681] hrtimer: interrupt took 7368940 ns
	[Nov14 15:58] systemd-fstab-generator[3477]: Ignoring "noauto" for root device
	[  +9.837185] systemd-fstab-generator[3805]: Ignoring "noauto" for root device
	[ +12.876350] kauditd_printk_skb: 2 callbacks suppressed
	[Nov14 15:59] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558] <==
	* {"level":"info","ts":"2023-11-14T15:58:34.237989Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:58:34.238014Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T15:58:34.245158Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-14T15:58:34.245512Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c194f0f1585e7a7d","initial-advertise-peer-urls":["https://192.168.39.147:2380"],"listen-peer-urls":["https://192.168.39.147:2380"],"advertise-client-urls":["https://192.168.39.147:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.147:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T15:58:34.245608Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T15:58:34.245703Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2023-11-14T15:58:34.24575Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2023-11-14T15:58:34.917372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T15:58:34.917525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T15:58:34.917544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgPreVoteResp from c194f0f1585e7a7d at term 1"}
	{"level":"info","ts":"2023-11-14T15:58:34.917557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.917564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgVoteResp from c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.917573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became leader at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.917581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c194f0f1585e7a7d elected leader c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.91921Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c194f0f1585e7a7d","local-member-attributes":"{Name:embed-certs-279880 ClientURLs:[https://192.168.39.147:2379]}","request-path":"/0/members/c194f0f1585e7a7d/attributes","cluster-id":"582b8c8375119e1d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:58:34.919542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:58:34.920468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:58:34.920953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.147:2379"}
	{"level":"info","ts":"2023-11-14T15:58:34.921092Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:58:34.921405Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:58:34.922327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:58:34.925737Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:58:34.922943Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:58:34.925863Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:58:34.925915Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  16:08:01 up 14 min,  0 users,  load average: 0.91, 0.74, 0.44
	Linux embed-certs-279880 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b] <==
	* W1114 16:03:37.583144       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:03:37.583249       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:03:37.583260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:03:37.583359       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:03:37.583525       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:03:37.584738       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:04:36.444165       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:04:37.584247       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:04:37.584328       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:04:37.584340       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:04:37.585399       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:04:37.585542       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:04:37.585594       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:05:36.443842       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:06:36.443964       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:06:37.584965       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:06:37.585042       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:06:37.585060       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:06:37.586526       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:06:37.586664       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:06:37.586726       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:07:36.443188       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2] <==
	* I1114 16:02:22.252001       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:02:51.762537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:02:52.262086       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:03:21.768199       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:03:22.270598       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:03:51.774532       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:03:52.284749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:04:21.783374       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:04:22.296085       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:04:45.296860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.565079ms"
	E1114 16:04:51.789515       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:04:52.306880       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:04:57.292249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="71.444µs"
	E1114 16:05:21.795288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:05:22.316756       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:05:51.803037       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:05:52.326783       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:06:21.811932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:06:22.336208       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:06:51.817616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:06:52.345524       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:07:21.823979       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:07:22.354644       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:07:51.829729       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:07:52.363958       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606] <==
	* I1114 15:58:56.930050       1 server_others.go:69] "Using iptables proxy"
	I1114 15:58:56.946869       1 node.go:141] Successfully retrieved node IP: 192.168.39.147
	I1114 15:58:57.002578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:58:57.002622       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:58:57.005255       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:58:57.005579       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:58:57.005995       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:58:57.006185       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:58:57.008139       1 config.go:188] "Starting service config controller"
	I1114 15:58:57.008756       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:58:57.008829       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:58:57.008838       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:58:57.010855       1 config.go:315] "Starting node config controller"
	I1114 15:58:57.010896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:58:57.109914       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:58:57.110024       1 shared_informer.go:318] Caches are synced for service config
	I1114 15:58:57.111633       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df] <==
	* E1114 15:58:36.698498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 15:58:36.698536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:36.698544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 15:58:36.698553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:58:36.698561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:58:36.698073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:36.698782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:36.698154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 15:58:37.563704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:37.563757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 15:58:37.602978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:58:37.603119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1114 15:58:37.642019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:58:37.642115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:58:37.769824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:37.769903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 15:58:37.775526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:58:37.775592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 15:58:37.787183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:58:37.787249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 15:58:37.790755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 15:58:37.790822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 15:58:37.995874       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 15:58:37.995958       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1114 15:58:41.155064       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:53:28 UTC, ends at Tue 2023-11-14 16:08:01 UTC. --
	Nov 14 16:05:11 embed-certs-279880 kubelet[3812]: E1114 16:05:11.275326    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:05:24 embed-certs-279880 kubelet[3812]: E1114 16:05:24.276379    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:05:38 embed-certs-279880 kubelet[3812]: E1114 16:05:38.278242    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:05:40 embed-certs-279880 kubelet[3812]: E1114 16:05:40.303002    3812 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:05:40 embed-certs-279880 kubelet[3812]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:05:40 embed-certs-279880 kubelet[3812]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:05:40 embed-certs-279880 kubelet[3812]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:05:52 embed-certs-279880 kubelet[3812]: E1114 16:05:52.276879    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:06:07 embed-certs-279880 kubelet[3812]: E1114 16:06:07.278986    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:06:18 embed-certs-279880 kubelet[3812]: E1114 16:06:18.288045    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:06:31 embed-certs-279880 kubelet[3812]: E1114 16:06:31.275625    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:06:40 embed-certs-279880 kubelet[3812]: E1114 16:06:40.303519    3812 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:06:40 embed-certs-279880 kubelet[3812]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:06:40 embed-certs-279880 kubelet[3812]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:06:40 embed-certs-279880 kubelet[3812]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:06:45 embed-certs-279880 kubelet[3812]: E1114 16:06:45.275581    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:06:57 embed-certs-279880 kubelet[3812]: E1114 16:06:57.275772    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:07:08 embed-certs-279880 kubelet[3812]: E1114 16:07:08.276054    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:07:21 embed-certs-279880 kubelet[3812]: E1114 16:07:21.275617    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:07:36 embed-certs-279880 kubelet[3812]: E1114 16:07:36.276874    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:07:40 embed-certs-279880 kubelet[3812]: E1114 16:07:40.305974    3812 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:07:40 embed-certs-279880 kubelet[3812]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:07:40 embed-certs-279880 kubelet[3812]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:07:40 embed-certs-279880 kubelet[3812]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:07:50 embed-certs-279880 kubelet[3812]: E1114 16:07:50.277162    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	
	* 
	* ==> storage-provisioner [fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992] <==
	* I1114 15:58:56.822925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 15:58:56.850562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 15:58:56.850690       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 15:58:56.879526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 15:58:56.881128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-279880_a5f186f8-8d31-4b40-8055-1e958bef9301!
	I1114 15:58:56.882738       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b2292e6-be29-4fb5-a8ce-24e3188549d9", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-279880_a5f186f8-8d31-4b40-8055-1e958bef9301 became leader
	I1114 15:58:56.981926       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-279880_a5f186f8-8d31-4b40-8055-1e958bef9301!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279880 -n embed-certs-279880
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-279880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-g5wh5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-279880 describe pod metrics-server-57f55c9bc5-g5wh5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-279880 describe pod metrics-server-57f55c9bc5-g5wh5: exit status 1 (72.022681ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-g5wh5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-279880 describe pod metrics-server-57f55c9bc5-g5wh5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1114 15:59:39.652870  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:08:15.407350373 +0000 UTC m=+5362.937535337
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
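(Editor's note: the timed-out step above is a plain label-selector readiness wait. As a minimal manual sketch, hedged and not the harness's actual code path: the context name, namespace, label, and 9m timeout below are taken verbatim from the log lines above, and `kubectl wait` is used as an equivalent stand-in for the test helper.)

	kubectl --context default-k8s-diff-port-529430 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s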
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-529430 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-529430 logs -n 25: (1.661287319s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
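	The start invocations in the table are wrapped across several rows. Joined back onto one line, the last one (the default-k8s-diff-port-529430 profile, which is also the profile the "Last Start" log below belongs to) reads as follows; this is a reconstruction from the rows above, and the binary name is assumed (the harness sets MINIKUBE_BIN=out/minikube-linux-amd64):

		minikube start -p default-k8s-diff-port-529430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.3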
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:49:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:49:49.997953  876668 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:49:49.998137  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998147  876668 out.go:309] Setting ErrFile to fd 2...
	I1114 15:49:49.998152  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998369  876668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:49:49.998978  876668 out.go:303] Setting JSON to false
	I1114 15:49:50.000072  876668 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":45142,"bootTime":1699931848,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:49:50.000141  876668 start.go:138] virtualization: kvm guest
	I1114 15:49:50.002690  876668 out.go:177] * [default-k8s-diff-port-529430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:49:50.004392  876668 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:49:50.004396  876668 notify.go:220] Checking for updates...
	I1114 15:49:50.006193  876668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:49:50.007844  876668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:49:50.009232  876668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:49:50.010572  876668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:49:50.011857  876668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:49:50.013604  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:49:50.014059  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.014149  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.028903  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I1114 15:49:50.029290  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.029869  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.029892  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.030244  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.030474  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.030753  876668 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:49:50.031049  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.031096  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.045696  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I1114 15:49:50.046117  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.046625  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.046658  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.047069  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.047303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.082731  876668 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:49:50.084362  876668 start.go:298] selected driver: kvm2
	I1114 15:49:50.084384  876668 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.084517  876668 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:49:50.085533  876668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.085625  876668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:49:50.100834  876668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:49:50.101226  876668 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:49:50.101308  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:49:50.101328  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:49:50.101342  876668 start_flags.go:323] config:
	{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.101540  876668 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.103413  876668 out.go:177] * Starting control plane node default-k8s-diff-port-529430 in cluster default-k8s-diff-port-529430
	I1114 15:49:49.196989  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:52.269051  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:50.104763  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:49:50.104815  876668 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:49:50.104835  876668 cache.go:56] Caching tarball of preloaded images
	I1114 15:49:50.104932  876668 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:49:50.104946  876668 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:49:50.105089  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:49:50.105307  876668 start.go:365] acquiring machines lock for default-k8s-diff-port-529430: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:49:58.349061  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:01.421017  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:07.501030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:10.573058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:16.653093  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:19.725040  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:25.805047  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:28.877039  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:34.957084  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:38.029008  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:44.109068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:47.181018  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:53.261065  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:56.333048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:02.413048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:05.485063  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:11.565034  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:14.636996  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:20.717050  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:23.789097  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:29.869058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:32.941066  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:39.021029  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:42.093064  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:48.173074  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:51.245007  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:57.325014  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:00.397111  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:06.477052  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:09.549016  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:15.629105  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:18.701000  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:24.781004  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:27.853046  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:33.933030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:37.005067  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:43.085068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:46.157044  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:52.237056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:55.309080  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:01.389056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:04.461005  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:10.541083  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:13.613033  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:16.617368  876220 start.go:369] acquired machines lock for "embed-certs-279880" in 4m25.691009916s
	I1114 15:53:16.617492  876220 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:16.617500  876220 fix.go:54] fixHost starting: 
	I1114 15:53:16.617993  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:16.618029  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:16.633223  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1114 15:53:16.633787  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:16.634385  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:53:16.634412  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:16.634743  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:16.634958  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:16.635120  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:53:16.636933  876220 fix.go:102] recreateIfNeeded on embed-certs-279880: state=Stopped err=<nil>
	I1114 15:53:16.636967  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	W1114 15:53:16.637164  876220 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:16.638727  876220 out.go:177] * Restarting existing kvm2 VM for "embed-certs-279880" ...
	I1114 15:53:16.615062  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:16.615116  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:53:16.617147  876065 machine.go:91] provisioned docker machine in 4m37.399136623s
	I1114 15:53:16.617196  876065 fix.go:56] fixHost completed within 4m37.422027817s
	I1114 15:53:16.617203  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 4m37.422123699s
	W1114 15:53:16.617282  876065 start.go:691] error starting host: provision: host is not running
	W1114 15:53:16.617491  876065 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1114 15:53:16.617502  876065 start.go:706] Will try again in 5 seconds ...
	I1114 15:53:16.640137  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Start
	I1114 15:53:16.640330  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring networks are active...
	I1114 15:53:16.641029  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network default is active
	I1114 15:53:16.641386  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network mk-embed-certs-279880 is active
	I1114 15:53:16.641738  876220 main.go:141] libmachine: (embed-certs-279880) Getting domain xml...
	I1114 15:53:16.642488  876220 main.go:141] libmachine: (embed-certs-279880) Creating domain...
	I1114 15:53:17.858298  876220 main.go:141] libmachine: (embed-certs-279880) Waiting to get IP...
	I1114 15:53:17.859506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:17.859912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:17.860039  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:17.859881  877182 retry.go:31] will retry after 225.269159ms: waiting for machine to come up
	I1114 15:53:18.086611  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.087099  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.087135  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.087062  877182 retry.go:31] will retry after 322.840106ms: waiting for machine to come up
	I1114 15:53:18.411781  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.412238  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.412278  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.412127  877182 retry.go:31] will retry after 459.77881ms: waiting for machine to come up
	I1114 15:53:18.873994  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.874393  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.874433  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.874341  877182 retry.go:31] will retry after 460.123636ms: waiting for machine to come up
	I1114 15:53:19.335916  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.336488  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.336520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.336414  877182 retry.go:31] will retry after 526.141665ms: waiting for machine to come up
	I1114 15:53:19.864336  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.864830  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.864856  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.864766  877182 retry.go:31] will retry after 817.261813ms: waiting for machine to come up
	I1114 15:53:20.683806  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:20.684289  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:20.684309  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:20.684244  877182 retry.go:31] will retry after 1.026381849s: waiting for machine to come up
	I1114 15:53:21.619196  876065 start.go:365] acquiring machines lock for no-preload-490998: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:53:21.712760  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:21.713237  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:21.713263  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:21.713201  877182 retry.go:31] will retry after 1.088672482s: waiting for machine to come up
	I1114 15:53:22.803222  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:22.803698  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:22.803734  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:22.803639  877182 retry.go:31] will retry after 1.394534659s: waiting for machine to come up
	I1114 15:53:24.199372  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:24.199764  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:24.199794  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:24.199706  877182 retry.go:31] will retry after 1.511094366s: waiting for machine to come up
	I1114 15:53:25.713650  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:25.714062  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:25.714107  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:25.713980  877182 retry.go:31] will retry after 1.821074261s: waiting for machine to come up
	I1114 15:53:27.536875  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:27.537423  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:27.537458  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:27.537349  877182 retry.go:31] will retry after 2.856840662s: waiting for machine to come up
	I1114 15:53:30.395562  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:30.396059  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:30.396086  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:30.396007  877182 retry.go:31] will retry after 4.003431067s: waiting for machine to come up
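	The "will retry after ..." lines above come from a wait-for-IP loop that sleeps a growing, jittered interval between attempts. A minimal sketch of that pattern is below; getIP is a hypothetical stand-in for querying the libvirt DHCP leases, and this is only an illustration of the backoff shape, not minikube's actual retry.go implementation:

		package main

		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)

		// getIP stands in for looking up the guest's address in the DHCP leases;
		// it fails until the machine has come up.
		func getIP(attempt int) (string, error) {
			if attempt < 5 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.147", nil
		}

		func main() {
			base := 200 * time.Millisecond
			for attempt := 1; ; attempt++ {
				if ip, err := getIP(attempt); err == nil {
					fmt.Println("Found IP for machine:", ip)
					return
				}
				// Add jitter so repeated starts do not probe in lockstep.
				wait := base + time.Duration(rand.Int63n(int64(base)))
				fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
				time.Sleep(wait)
				base = base * 3 / 2 // grow the delay roughly like the intervals in the log
			}
		}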
	I1114 15:53:35.689894  876396 start.go:369] acquired machines lock for "old-k8s-version-842105" in 4m23.221865246s
	I1114 15:53:35.689964  876396 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:35.689973  876396 fix.go:54] fixHost starting: 
	I1114 15:53:35.690410  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:35.690446  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:35.709418  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I1114 15:53:35.709816  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:35.710366  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:53:35.710400  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:35.710760  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:35.710946  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:35.711101  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:53:35.712666  876396 fix.go:102] recreateIfNeeded on old-k8s-version-842105: state=Stopped err=<nil>
	I1114 15:53:35.712696  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	W1114 15:53:35.712882  876396 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:35.715357  876396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-842105" ...
	I1114 15:53:34.403163  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.403706  876220 main.go:141] libmachine: (embed-certs-279880) Found IP for machine: 192.168.39.147
	I1114 15:53:34.403737  876220 main.go:141] libmachine: (embed-certs-279880) Reserving static IP address...
	I1114 15:53:34.403757  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has current primary IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.404290  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.404318  876220 main.go:141] libmachine: (embed-certs-279880) DBG | skip adding static IP to network mk-embed-certs-279880 - found existing host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"}
	I1114 15:53:34.404331  876220 main.go:141] libmachine: (embed-certs-279880) Reserved static IP address: 192.168.39.147
	I1114 15:53:34.404343  876220 main.go:141] libmachine: (embed-certs-279880) Waiting for SSH to be available...
	I1114 15:53:34.404351  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Getting to WaitForSSH function...
	I1114 15:53:34.406833  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407219  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.407248  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407367  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH client type: external
	I1114 15:53:34.407412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa (-rw-------)
	I1114 15:53:34.407481  876220 main.go:141] libmachine: (embed-certs-279880) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:34.407525  876220 main.go:141] libmachine: (embed-certs-279880) DBG | About to run SSH command:
	I1114 15:53:34.407551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | exit 0
	I1114 15:53:34.504225  876220 main.go:141] libmachine: (embed-certs-279880) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:34.504696  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetConfigRaw
	I1114 15:53:34.505414  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.508202  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.508632  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.508685  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.509034  876220 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/config.json ...
	I1114 15:53:34.509282  876220 machine.go:88] provisioning docker machine ...
	I1114 15:53:34.509309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:34.509521  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509730  876220 buildroot.go:166] provisioning hostname "embed-certs-279880"
	I1114 15:53:34.509758  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509894  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.511987  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512285  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.512317  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512472  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.512629  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512751  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512925  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.513118  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.513578  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.513594  876220 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-279880 && echo "embed-certs-279880" | sudo tee /etc/hostname
	I1114 15:53:34.664546  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-279880
	
	I1114 15:53:34.664595  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.667537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.667908  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.667941  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.668142  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.668388  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668631  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668910  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.669142  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.669652  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.669684  876220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-279880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-279880/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-279880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:34.810710  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:34.810745  876220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:34.810768  876220 buildroot.go:174] setting up certificates
	I1114 15:53:34.810780  876220 provision.go:83] configureAuth start
	I1114 15:53:34.810788  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.811138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.814056  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.814537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814747  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.817131  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817513  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.817544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817675  876220 provision.go:138] copyHostCerts
	I1114 15:53:34.817774  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:34.817789  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:34.817869  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:34.817990  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:34.818006  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:34.818039  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:34.818117  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:34.818129  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:34.818161  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:34.818226  876220 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.embed-certs-279880 san=[192.168.39.147 192.168.39.147 localhost 127.0.0.1 minikube embed-certs-279880]
	I1114 15:53:34.925955  876220 provision.go:172] copyRemoteCerts
	I1114 15:53:34.926014  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:34.926039  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.928954  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929322  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.929346  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929520  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.929703  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.929866  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.930033  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.026199  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:35.049682  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1114 15:53:35.072415  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:53:35.097200  876220 provision.go:86] duration metric: configureAuth took 286.405404ms
	I1114 15:53:35.097226  876220 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:35.097425  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:53:35.097558  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.100561  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.100912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.100965  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.101091  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.101296  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101500  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101641  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.101795  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.102165  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.102198  876220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:35.411682  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:35.411719  876220 machine.go:91] provisioned docker machine in 902.419916ms
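	The "%!s(MISSING)" token in the crio sysconfig command a few lines above is a Go fmt marker for a format verb that had no matching argument when the line was logged, not literal command text; the command presumably sent over SSH has the %s restored, i.e. (a hedged reconstruction, not new output):

		sudo mkdir -p /etc/sysconfig && printf %s "
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

	The same logging artifact accounts for the later "date +%!s(MISSING).%!N(MISSING)" probe, which presumably runs as date +%s.%N when minikube compares the guest clock against the host.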
	I1114 15:53:35.411733  876220 start.go:300] post-start starting for "embed-certs-279880" (driver="kvm2")
	I1114 15:53:35.411748  876220 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:35.411770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.412161  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:35.412201  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.415071  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.415551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.415849  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.416000  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.416143  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.512565  876220 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:35.517087  876220 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:35.517146  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:35.517235  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:35.517356  876220 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:35.517511  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:35.527797  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:35.552798  876220 start.go:303] post-start completed in 141.045785ms
	I1114 15:53:35.552827  876220 fix.go:56] fixHost completed within 18.935326604s
	I1114 15:53:35.552855  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.555540  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.555930  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.555970  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.556155  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.556390  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556573  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.557007  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.557338  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.557348  876220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:35.689729  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977215.639237319
	
	I1114 15:53:35.689759  876220 fix.go:206] guest clock: 1699977215.639237319
	I1114 15:53:35.689769  876220 fix.go:219] Guest: 2023-11-14 15:53:35.639237319 +0000 UTC Remote: 2023-11-14 15:53:35.552830911 +0000 UTC m=+284.779127994 (delta=86.406408ms)
	I1114 15:53:35.689801  876220 fix.go:190] guest clock delta is within tolerance: 86.406408ms
	I1114 15:53:35.689807  876220 start.go:83] releasing machines lock for "embed-certs-279880", held for 19.072338997s
	I1114 15:53:35.689842  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.690197  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:35.692786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693260  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.693311  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693440  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694011  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694222  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694338  876220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:35.694404  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.694455  876220 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:35.694484  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.697198  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697220  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697702  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697732  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697771  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697865  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698085  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698088  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698297  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698303  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698438  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698562  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.698974  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.813318  876220 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:35.819124  876220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:35.957038  876220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:35.964876  876220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:35.964984  876220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:35.980758  876220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
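The find/mv step above is how competing CNI definitions get sidelined: anything in /etc/cni/net.d matching *bridge* or *podman* is renamed with a .mk_disabled suffix so only the CNI minikube manages stays active. Run standalone on the guest, the equivalent is approximately:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;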
	I1114 15:53:35.980780  876220 start.go:472] detecting cgroup driver to use...
	I1114 15:53:35.980848  876220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:35.993968  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:36.006564  876220 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:36.006626  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:36.021314  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:36.035842  876220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:36.147617  876220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:36.268025  876220 docker.go:219] disabling docker service ...
	I1114 15:53:36.268113  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:36.280847  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:36.292659  876220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:36.414923  876220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:36.534481  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:36.547652  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:36.565229  876220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:53:36.565312  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.574949  876220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:36.575030  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.585105  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.594790  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.603613  876220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
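Taken together, the crictl.yaml write and the sed edits above leave CRI-O pointed at the 3.9 pause image, using the cgroupfs driver with a pod-scoped conmon cgroup. A quick way to confirm the result on the guest (illustrative):

    cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"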
	I1114 15:53:36.613116  876220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:36.620828  876220 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:36.620884  876220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:36.632600  876220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:36.642150  876220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:36.756773  876220 ssh_runner.go:195] Run: sudo systemctl restart crio
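The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the flow falls back to loading the module and enabling IPv4 forwarding before restarting CRI-O. Done by hand the same sequence is roughly:

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio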
	I1114 15:53:36.929381  876220 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:36.929467  876220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:36.934735  876220 start.go:540] Will wait 60s for crictl version
	I1114 15:53:36.934790  876220 ssh_runner.go:195] Run: which crictl
	I1114 15:53:36.940182  876220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:36.991630  876220 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:36.991718  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.045160  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.097281  876220 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:53:35.716835  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Start
	I1114 15:53:35.716987  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring networks are active...
	I1114 15:53:35.717715  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network default is active
	I1114 15:53:35.718030  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network mk-old-k8s-version-842105 is active
	I1114 15:53:35.718429  876396 main.go:141] libmachine: (old-k8s-version-842105) Getting domain xml...
	I1114 15:53:35.719055  876396 main.go:141] libmachine: (old-k8s-version-842105) Creating domain...
	I1114 15:53:36.991860  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting to get IP...
	I1114 15:53:36.992911  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:36.993376  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:36.993427  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:36.993318  877295 retry.go:31] will retry after 227.553321ms: waiting for machine to come up
	I1114 15:53:37.223023  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.223561  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.223629  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.223511  877295 retry.go:31] will retry after 308.951372ms: waiting for machine to come up
	I1114 15:53:37.098693  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:37.102205  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102676  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:37.102710  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102955  876220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:37.107113  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:37.120009  876220 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:53:37.120075  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:37.160178  876220 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:53:37.160241  876220 ssh_runner.go:195] Run: which lz4
	I1114 15:53:37.164351  876220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:53:37.168645  876220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:53:37.168684  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:53:39.026796  876220 crio.go:444] Took 1.862508 seconds to copy over tarball
	I1114 15:53:39.026876  876220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:53:37.534243  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.534797  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.534827  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.534774  877295 retry.go:31] will retry after 440.76682ms: waiting for machine to come up
	I1114 15:53:37.977712  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.978257  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.978287  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.978207  877295 retry.go:31] will retry after 402.601155ms: waiting for machine to come up
	I1114 15:53:38.383001  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.383515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.383551  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.383468  877295 retry.go:31] will retry after 580.977501ms: waiting for machine to come up
	I1114 15:53:38.966457  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.967088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.967121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.967026  877295 retry.go:31] will retry after 679.65563ms: waiting for machine to come up
	I1114 15:53:39.648086  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:39.648560  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:39.648593  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:39.648501  877295 retry.go:31] will retry after 1.014858956s: waiting for machine to come up
	I1114 15:53:40.664728  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:40.665285  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:40.665321  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:40.665230  877295 retry.go:31] will retry after 1.035036164s: waiting for machine to come up
	I1114 15:53:41.701639  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:41.702088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:41.702123  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:41.702029  877295 retry.go:31] will retry after 1.15711647s: waiting for machine to come up
	I1114 15:53:41.885259  876220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.858355323s)
	I1114 15:53:41.885288  876220 crio.go:451] Took 2.858463 seconds to extract the tarball
	I1114 15:53:41.885300  876220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:53:41.926498  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:41.972943  876220 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:53:41.972980  876220 cache_images.go:84] Images are preloaded, skipping loading
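Because the freshly restarted guest has no images for v1.28.3, the ~457 MB preload tarball is copied over and unpacked into /var, which backs CRI-O's image store; the second crictl listing then confirms everything is preloaded. In isolation, the guest-side portion amounts to:

    # unpack the preloaded image set into /var, then verify the runtime can see the images
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | grep kube-apiserver   # expect registry.k8s.io/kube-apiserver:v1.28.3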
	I1114 15:53:41.973053  876220 ssh_runner.go:195] Run: crio config
	I1114 15:53:42.038006  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:53:42.038032  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:53:42.038053  876220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:53:42.038071  876220 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-279880 NodeName:embed-certs-279880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:53:42.038234  876220 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-279880"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:53:42.038323  876220 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-279880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
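The [Service] drop-in above clears and re-sets ExecStart so the kubelet runs from the minikube-managed binary with CRI-O's socket and the node IP pinned; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To inspect the merged unit on the guest:

    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf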
	I1114 15:53:42.038394  876220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:53:42.050379  876220 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:53:42.050462  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:53:42.058921  876220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1114 15:53:42.074304  876220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:53:42.090403  876220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
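The 2105-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new is the combined InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration shown earlier. One way to preview what kubeadm would do with it, without touching the node, is a dry run against the same binaries (illustrative):

    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run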
	I1114 15:53:42.106412  876220 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I1114 15:53:42.109907  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:42.122915  876220 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880 for IP: 192.168.39.147
	I1114 15:53:42.122945  876220 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:53:42.123106  876220 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:53:42.123148  876220 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:53:42.123226  876220 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/client.key
	I1114 15:53:42.123290  876220 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key.a88b087d
	I1114 15:53:42.123322  876220 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key
	I1114 15:53:42.123430  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:53:42.123456  876220 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:53:42.123467  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:53:42.123486  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:53:42.123517  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:53:42.123541  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:53:42.123584  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:42.124261  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:53:42.149787  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:53:42.177563  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:53:42.203326  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:53:42.228832  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:53:42.254674  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:53:42.280548  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:53:42.305318  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:53:42.331461  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:53:42.356555  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:53:42.382688  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:53:42.407945  876220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:53:42.424902  876220 ssh_runner.go:195] Run: openssl version
	I1114 15:53:42.430411  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:53:42.443033  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448429  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448496  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.455631  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:53:42.466421  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:53:42.476013  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480381  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480434  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.486048  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:53:42.495375  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:53:42.505366  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509762  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509804  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.515519  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
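The openssl/ln pairs above install each CA into the system trust store: openssl x509 -hash prints the subject-name hash, and the certificate is then symlinked as <hash>.0 under /etc/ssl/certs, which is where OpenSSL-based clients look certificates up. For the minikube CA in this run that works out to:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0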
	I1114 15:53:42.524838  876220 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:53:42.528912  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:53:42.534641  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:53:42.540138  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:53:42.545849  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:53:42.551518  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:53:42.559001  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
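The run of openssl x509 -checkend 86400 calls above is the certificate health check: each exits 0 only if the given certificate is still valid 24 hours (86400 s) from now, which is why no regeneration is triggered here. For example:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "still valid for at least another day"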
	I1114 15:53:42.566135  876220 kubeadm.go:404] StartCluster: {Name:embed-certs-279880 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:53:42.566241  876220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:53:42.566297  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:42.613075  876220 cri.go:89] found id: ""
	I1114 15:53:42.613158  876220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:53:42.622675  876220 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:53:42.622696  876220 kubeadm.go:636] restartCluster start
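Restart-vs-recreate is decided from what is already on disk: crictl sees no kube-system containers, but kubeadm's files are present, so restartCluster is attempted instead of a clean init. The on-disk check boils down to something like:

    sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd \
      && echo "existing configuration found, attempting cluster restart"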
	I1114 15:53:42.622785  876220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:53:42.631529  876220 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.633202  876220 kubeconfig.go:92] found "embed-certs-279880" server: "https://192.168.39.147:8443"
	I1114 15:53:42.636588  876220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:53:42.645531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.645578  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.656733  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.656764  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.656807  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.667524  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.168290  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.168372  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.181051  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.668650  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.668772  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.681727  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.168359  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.168462  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.182012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.668666  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.668763  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.680872  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.168505  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.168625  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.180321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.667875  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.668016  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.680318  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.861352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:42.861900  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:42.861963  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:42.861836  877295 retry.go:31] will retry after 2.117184279s: waiting for machine to come up
	I1114 15:53:44.982059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:44.982506  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:44.982538  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:44.982449  877295 retry.go:31] will retry after 2.3999215s: waiting for machine to come up
	I1114 15:53:46.168271  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.168410  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.180809  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:46.667886  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.668009  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.679468  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.168072  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.168204  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.180268  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.667786  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.667948  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.678927  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.168531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.168660  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.180004  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.668597  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.668752  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.680945  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.168543  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.168635  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.180012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.668382  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.668486  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.683691  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.168265  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.168353  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.179169  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.667618  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.667730  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.678707  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.384177  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:47.384695  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:47.384734  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:47.384649  877295 retry.go:31] will retry after 2.820309413s: waiting for machine to come up
	I1114 15:53:50.208736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:50.209188  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:50.209221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:50.209130  877295 retry.go:31] will retry after 2.822648093s: waiting for machine to come up
	I1114 15:53:51.168046  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.168144  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.179168  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:51.668301  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.668407  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.680321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.167971  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:52.168062  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:52.179159  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.645656  876220 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:53:52.645688  876220 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:53:52.645702  876220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:53:52.645806  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:52.682368  876220 cri.go:89] found id: ""
	I1114 15:53:52.682482  876220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:53:52.697186  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:53:52.705449  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
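Each "Checking apiserver status" line above is one iteration of a roughly ten-second polling loop built on pgrep; since no kube-apiserver process ever appears, the loop times out ("context deadline exceeded") and, with all four kubeconfigs missing, minikube gives up on reusing the old control-plane state and regenerates it below. The per-iteration probe is simply:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running yet"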
	I1114 15:53:52.705503  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714019  876220 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714054  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:52.831334  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.796131  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.984427  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.060195  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
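The reconfigure path does not run a full kubeadm init; it replays the individual init phases against the regenerated config and then waits for the apiserver process. A condensed equivalent of the sequence driven above, assuming the same binary and config paths:

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done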
	I1114 15:53:54.137132  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:53:54.137217  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.155040  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.676264  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.176129  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.676614  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:53.034614  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:53.035044  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:53.035078  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:53.034993  877295 retry.go:31] will retry after 4.160398149s: waiting for machine to come up
	I1114 15:53:57.196776  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197211  876396 main.go:141] libmachine: (old-k8s-version-842105) Found IP for machine: 192.168.72.151
	I1114 15:53:57.197240  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserving static IP address...
	I1114 15:53:57.197260  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has current primary IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197667  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.197700  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserved static IP address: 192.168.72.151
	I1114 15:53:57.197724  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | skip adding static IP to network mk-old-k8s-version-842105 - found existing host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"}
	I1114 15:53:57.197742  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting for SSH to be available...
	I1114 15:53:57.197754  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Getting to WaitForSSH function...
	I1114 15:53:57.200279  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200646  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.200681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200916  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH client type: external
	I1114 15:53:57.200948  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa (-rw-------)
	I1114 15:53:57.200983  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:57.200999  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | About to run SSH command:
	I1114 15:53:57.201015  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | exit 0
	I1114 15:53:57.288554  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | SSH cmd err, output: <nil>: 
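For the old-k8s-version machine the driver only treats the VM as booted once an external ssh probe running "exit 0" succeeds with the options dumped above. Reproduced as a one-off command, with paths and address taken from this log:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa \
        docker@192.168.72.151 'exit 0' && echo reachable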
	I1114 15:53:57.288904  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetConfigRaw
	I1114 15:53:57.289691  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.292087  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292445  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.292501  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292720  876396 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/config.json ...
	I1114 15:53:57.292930  876396 machine.go:88] provisioning docker machine ...
	I1114 15:53:57.292950  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:57.293164  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293326  876396 buildroot.go:166] provisioning hostname "old-k8s-version-842105"
	I1114 15:53:57.293352  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293472  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.295765  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296139  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.296170  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296299  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.296470  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296625  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296768  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.296945  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.297524  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.297546  876396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-842105 && echo "old-k8s-version-842105" | sudo tee /etc/hostname
	I1114 15:53:58.537304  876668 start.go:369] acquired machines lock for "default-k8s-diff-port-529430" in 4m8.43196122s
	I1114 15:53:58.537380  876668 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:58.537392  876668 fix.go:54] fixHost starting: 
	I1114 15:53:58.537828  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:58.537865  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:58.555361  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I1114 15:53:58.555809  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:58.556346  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:53:58.556379  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:58.556762  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:58.556993  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:53:58.557144  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:53:58.558707  876668 fix.go:102] recreateIfNeeded on default-k8s-diff-port-529430: state=Stopped err=<nil>
	I1114 15:53:58.558736  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	W1114 15:53:58.558888  876668 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:58.561175  876668 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-529430" ...
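default-k8s-diff-port-529430 only acquires the machines lock after 4m8s of waiting behind the other profiles' machine operations; once acquired, the existing-but-Stopped domain is restarted rather than recreated. Under the kvm2 driver that restart is roughly equivalent to the following, although the driver goes through libvirt's API rather than the CLI (illustrative):

    virsh --connect qemu:///system start default-k8s-diff-port-529430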
	I1114 15:53:57.423888  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842105
	
	I1114 15:53:57.423971  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.427115  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427421  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.427459  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427660  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.427882  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428135  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428351  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.428584  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.429089  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.429124  876396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-842105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-842105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-842105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:57.554847  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:57.554893  876396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:57.554957  876396 buildroot.go:174] setting up certificates
	I1114 15:53:57.554974  876396 provision.go:83] configureAuth start
	I1114 15:53:57.554989  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.555342  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.558305  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.558711  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558876  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.561568  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.561937  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.561973  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.562106  876396 provision.go:138] copyHostCerts
	I1114 15:53:57.562196  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:57.562218  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:57.562284  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:57.562402  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:57.562413  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:57.562445  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:57.562520  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:57.562532  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:57.562561  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:57.562631  876396 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-842105 san=[192.168.72.151 192.168.72.151 localhost 127.0.0.1 minikube old-k8s-version-842105]
	I1114 15:53:57.825621  876396 provision.go:172] copyRemoteCerts
	I1114 15:53:57.825706  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:57.825739  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.828352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828732  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.828778  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828924  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.829159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.829356  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.829505  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:57.913614  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:57.935960  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:53:57.957927  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:53:57.980061  876396 provision.go:86] duration metric: configureAuth took 425.071777ms
	I1114 15:53:57.980109  876396 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:57.980308  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:53:57.980405  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.983736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984128  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.984161  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984367  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.984574  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984927  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.985116  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.985478  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.985505  876396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:58.297063  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:58.297107  876396 machine.go:91] provisioned docker machine in 1.004160647s
	I1114 15:53:58.297121  876396 start.go:300] post-start starting for "old-k8s-version-842105" (driver="kvm2")
	I1114 15:53:58.297135  876396 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:58.297159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.297578  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:58.297624  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.300608  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301051  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.301081  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301312  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.301485  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.301655  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.301774  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.387785  876396 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:58.391947  876396 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:58.391974  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:58.392056  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:58.392177  876396 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:58.392301  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:58.401525  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:58.422853  876396 start.go:303] post-start completed in 125.713467ms
	I1114 15:53:58.422892  876396 fix.go:56] fixHost completed within 22.732917848s
	I1114 15:53:58.422922  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.425682  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.426098  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426282  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.426487  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426663  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426830  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.427040  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:58.427400  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:58.427416  876396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:53:58.537121  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977238.485050071
	
	I1114 15:53:58.537151  876396 fix.go:206] guest clock: 1699977238.485050071
	I1114 15:53:58.537161  876396 fix.go:219] Guest: 2023-11-14 15:53:58.485050071 +0000 UTC Remote: 2023-11-14 15:53:58.422897103 +0000 UTC m=+286.112017318 (delta=62.152968ms)
	I1114 15:53:58.537187  876396 fix.go:190] guest clock delta is within tolerance: 62.152968ms
	I1114 15:53:58.537206  876396 start.go:83] releasing machines lock for "old-k8s-version-842105", held for 22.847251095s
	I1114 15:53:58.537248  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.537548  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:58.540515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.540932  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.540974  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.541110  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541612  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541912  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.542012  876396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:58.542077  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.542171  876396 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:58.542202  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.544841  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545190  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545246  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545465  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.545666  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.545694  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545714  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545816  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.545905  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.546006  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.546067  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.546212  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.546365  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.626301  876396 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:58.651834  876396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:58.799770  876396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:58.806042  876396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:58.806134  876396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:58.824707  876396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:58.824752  876396 start.go:472] detecting cgroup driver to use...
	I1114 15:53:58.824824  876396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:58.840144  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:58.854846  876396 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:58.854905  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:58.869926  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:58.883517  876396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:58.990519  876396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:59.108637  876396 docker.go:219] disabling docker service ...
	I1114 15:53:59.108712  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:59.124681  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:59.138748  876396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:59.260422  876396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:59.364365  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:59.376773  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:59.394948  876396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:53:59.395027  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.404000  876396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:59.404074  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.412822  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.421316  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.429685  876396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:59.438818  876396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:59.446459  876396 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:59.446533  876396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:59.459160  876396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:59.467670  876396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:59.579125  876396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:59.794436  876396 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:59.794525  876396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:59.801013  876396 start.go:540] Will wait 60s for crictl version
	I1114 15:53:59.801095  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:53:59.805735  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:59.851270  876396 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:59.851383  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.898885  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.953911  876396 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1114 15:53:58.562788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Start
	I1114 15:53:58.562971  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring networks are active...
	I1114 15:53:58.563570  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network default is active
	I1114 15:53:58.564001  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network mk-default-k8s-diff-port-529430 is active
	I1114 15:53:58.564406  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Getting domain xml...
	I1114 15:53:58.565186  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Creating domain...
	I1114 15:53:59.907130  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting to get IP...
	I1114 15:53:59.908507  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.908991  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.909128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:53:59.908977  877437 retry.go:31] will retry after 306.122553ms: waiting for machine to come up
	I1114 15:53:56.176595  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.676568  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.699015  876220 api_server.go:72] duration metric: took 2.561885213s to wait for apiserver process to appear ...
	I1114 15:53:56.699041  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:53:56.699058  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:53:59.955466  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:59.959121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959545  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:59.959572  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959896  876396 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:59.965859  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:59.982494  876396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 15:53:59.982563  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:00.029401  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:00.029483  876396 ssh_runner.go:195] Run: which lz4
	I1114 15:54:00.034065  876396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:54:00.039738  876396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:00.039780  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1114 15:54:01.846049  876396 crio.go:444] Took 1.812024 seconds to copy over tarball
	I1114 15:54:01.846160  876396 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:01.387625  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.387668  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.387690  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.430505  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.430539  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.930801  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.937138  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:01.937169  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.431712  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.442719  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:02.442758  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.931021  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.938062  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:54:02.947420  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:02.947453  876220 api_server.go:131] duration metric: took 6.248404315s to wait for apiserver health ...
	I1114 15:54:02.947465  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:54:02.947479  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:02.949231  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:00.216693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217419  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217476  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.217346  877437 retry.go:31] will retry after 276.469735ms: waiting for machine to come up
	I1114 15:54:00.496200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496596  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496632  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.496550  877437 retry.go:31] will retry after 390.20616ms: waiting for machine to come up
	I1114 15:54:00.888367  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889341  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.889235  877437 retry.go:31] will retry after 551.896336ms: waiting for machine to come up
	I1114 15:54:01.443159  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443794  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443825  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:01.443756  877437 retry.go:31] will retry after 655.228992ms: waiting for machine to come up
	I1114 15:54:02.100194  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100681  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100716  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.100609  877437 retry.go:31] will retry after 896.817469ms: waiting for machine to come up
	I1114 15:54:02.999296  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999947  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999979  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.999897  877437 retry.go:31] will retry after 1.177419274s: waiting for machine to come up
	I1114 15:54:04.178783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179425  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179452  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:04.179351  877437 retry.go:31] will retry after 1.259348434s: waiting for machine to come up
	I1114 15:54:02.950643  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:02.986775  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:03.054339  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:03.074346  876220 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:03.074405  876220 system_pods.go:61] "coredns-5dd5756b68-gqxld" [0b846e58-0bbc-4770-94a4-8324753b36c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:03.074428  876220 system_pods.go:61] "etcd-embed-certs-279880" [e085e7a7-ec2e-4cf6-bbb2-d242a5e8d075] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:03.074442  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [4ffbfbaf-9978-4bb1-9e4e-ef23365f78fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:03.074455  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [d895906c-899f-41b3-9484-1a6985b978f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:03.074471  876220 system_pods.go:61] "kube-proxy-j2qnm" [feee8604-a749-4908-8361-42f63d55ec64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:03.074485  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [4325a0ba-9013-4899-b01b-befcb4cd5b72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:03.074504  876220 system_pods.go:61] "metrics-server-57f55c9bc5-gvtbw" [a7c44219-4b00-49c0-817f-68f9499f1ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:03.074531  876220 system_pods.go:61] "storage-provisioner" [f464123e-8329-4785-87ae-78ff30ac7d27] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:03.074547  876220 system_pods.go:74] duration metric: took 20.179327ms to wait for pod list to return data ...
	I1114 15:54:03.074558  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:03.078482  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:03.078526  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:03.078542  876220 node_conditions.go:105] duration metric: took 3.972732ms to run NodePressure ...
	I1114 15:54:03.078565  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:03.514232  876220 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521097  876220 kubeadm.go:787] kubelet initialised
	I1114 15:54:03.521125  876220 kubeadm.go:788] duration metric: took 6.859971ms waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521168  876220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:03.528777  876220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:05.249338  876396 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403140591s)
	I1114 15:54:05.249383  876396 crio.go:451] Took 3.403300 seconds to extract the tarball
	I1114 15:54:05.249397  876396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:05.298779  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:05.351838  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:05.351873  876396 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:05.352034  876396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.352124  876396 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.352201  876396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.352219  876396 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.352067  876396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.352087  876396 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354089  876396 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1114 15:54:05.354101  876396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.354115  876396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.354117  876396 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354097  876396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.354178  876396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.354197  876396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.354270  876396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.512829  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.521658  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.529228  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1114 15:54:05.529451  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.529597  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.529802  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.534672  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.613591  876396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1114 15:54:05.613650  876396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.613721  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.644613  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.668090  876396 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1114 15:54:05.668167  876396 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.668231  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.685343  876396 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1114 15:54:05.685398  876396 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1114 15:54:05.685458  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725459  876396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1114 15:54:05.725508  876396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.725523  876396 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1114 15:54:05.725561  876396 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.725565  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725602  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727180  876396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1114 15:54:05.727215  876396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.727249  876396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1114 15:54:05.727283  876396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.727254  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727322  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.727325  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.849608  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.849657  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1114 15:54:05.849694  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.849747  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.849753  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.849830  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1114 15:54:05.849847  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.990379  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1114 15:54:05.990536  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1114 15:54:06.006943  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1114 15:54:06.006966  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1114 15:54:06.007017  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1114 15:54:06.007076  876396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.007134  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1114 15:54:06.013121  876396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1114 15:54:06.013141  876396 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.013192  876396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1114 15:54:05.440685  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441307  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441342  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:05.441243  877437 retry.go:31] will retry after 1.84307404s: waiting for machine to come up
	I1114 15:54:07.286027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286581  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286612  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:07.286501  877437 retry.go:31] will retry after 2.149522769s: waiting for machine to come up
	I1114 15:54:09.437500  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.437998  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.438027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:09.437930  877437 retry.go:31] will retry after 1.825733531s: waiting for machine to come up
	I1114 15:54:06.558998  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.056443  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.550292  876220 pod_ready.go:92] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:09.550325  876220 pod_ready.go:81] duration metric: took 6.02152032s waiting for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:09.550338  876220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:07.587512  876396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.574275406s)
	I1114 15:54:07.587549  876396 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1114 15:54:07.587609  876396 cache_images.go:92] LoadImages completed in 2.235719587s
	W1114 15:54:07.587745  876396 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1114 15:54:07.587935  876396 ssh_runner.go:195] Run: crio config
	I1114 15:54:07.677561  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:07.677590  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:07.677624  876396 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:07.677649  876396 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-842105 NodeName:old-k8s-version-842105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 15:54:07.677852  876396 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-842105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-842105
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.151:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:07.677991  876396 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-842105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
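The kubeadm manifest and kubelet unit above are rendered by minikube from Go templates before being copied to the node. The snippet below is only a simplified sketch of that pattern using text/template; the struct, template text, and field names are illustrative and not minikube's actual kubeadm.go code (the values are taken from the log above).

// Sketch only: render a minimal kubeadm config from a template,
// in the spirit of the kubeadm.go:181 output logged above.
package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical parameter struct for this example.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the old-k8s-version-842105 log entries above.
	p := kubeadmParams{
		AdvertiseAddress: "192.168.72.151",
		BindPort:         8443,
		NodeName:         "old-k8s-version-842105",
		PodSubnet:        "10.244.0.0/16",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}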
	I1114 15:54:07.678072  876396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1114 15:54:07.690041  876396 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:07.690195  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:07.699428  876396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1114 15:54:07.717871  876396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:07.736451  876396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1114 15:54:07.760405  876396 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:07.766002  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:07.782987  876396 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105 for IP: 192.168.72.151
	I1114 15:54:07.783024  876396 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:07.783232  876396 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:07.783328  876396 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:07.783435  876396 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/client.key
	I1114 15:54:07.783530  876396 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key.8e16fdf2
	I1114 15:54:07.783587  876396 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key
	I1114 15:54:07.783733  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:07.783774  876396 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:07.783788  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:07.783825  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:07.783860  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:07.783903  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:07.783976  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:07.784951  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:07.817959  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:07.849497  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:07.882885  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:54:07.917706  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:07.951168  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:07.980449  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:08.004910  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:08.038634  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:08.068999  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:08.099934  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:08.131714  876396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:08.150662  876396 ssh_runner.go:195] Run: openssl version
	I1114 15:54:08.158258  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:08.168218  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173533  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173650  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.179886  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:08.189654  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:08.199563  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204439  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204512  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.210587  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:08.220509  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:08.233859  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240418  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240484  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.248025  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:08.261693  876396 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:08.267518  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:08.275553  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:08.283812  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:08.292063  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:08.299976  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:08.307726  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
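The openssl x509 -checkend 86400 probes above ask whether each certificate expires within the next 24 hours. A rough Go equivalent of that check, using a hypothetical certificate path rather than the exact files probed in this run, could look like:

// Sketch: report whether a PEM certificate expires within the next 24h,
// mirroring `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path; not one of the files checked above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}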
	I1114 15:54:08.315248  876396 kubeadm.go:404] StartCluster: {Name:old-k8s-version-842105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:08.315441  876396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:08.315509  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:08.373222  876396 cri.go:89] found id: ""
	I1114 15:54:08.373309  876396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:08.386081  876396 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:08.386113  876396 kubeadm.go:636] restartCluster start
	I1114 15:54:08.386175  876396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:08.398113  876396 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.399779  876396 kubeconfig.go:92] found "old-k8s-version-842105" server: "https://192.168.72.151:8443"
	I1114 15:54:08.403355  876396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:08.415044  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.415107  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.431221  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.431246  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.431301  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.441629  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.941906  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.942002  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.953895  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.442080  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.442167  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.454396  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.941960  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.942060  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.957741  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.442467  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.442585  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.459029  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.942110  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.942218  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.958207  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.441724  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.441846  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.456551  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.942092  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.942207  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.954734  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.265162  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265717  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:11.265645  877437 retry.go:31] will retry after 3.454522942s: waiting for machine to come up
	I1114 15:54:14.722448  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722869  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722900  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:14.722811  877437 retry.go:31] will retry after 4.385736497s: waiting for machine to come up
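The retry.go lines above show libmachine backing off with growing delays while it waits for the VM to obtain an IP address from its DHCP lease. A minimal sketch of that retry-with-backoff pattern follows; lookupIP is a hypothetical stand-in for the lease query, not minikube's code.

// Sketch: wait for a VM IP with jittered, doubling backoff,
// in the spirit of the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP address yet")

// lookupIP is a stub for illustration; it never succeeds here.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(maxAttempts int) (string, error) {
	backoff := time.Second
	for i := 0; i < maxAttempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter so repeated callers do not retry in lockstep.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	if _, err := waitForIP(3); err != nil {
		fmt.Println(err)
	}
}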
	I1114 15:54:11.568989  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:11.569021  876220 pod_ready.go:81] duration metric: took 2.018672405s waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:11.569032  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:13.599380  876220 pod_ready.go:102] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:15.095781  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.095806  876220 pod_ready.go:81] duration metric: took 3.52676767s waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.095816  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101837  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.101860  876220 pod_ready.go:81] duration metric: took 6.035008ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101871  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107099  876220 pod_ready.go:92] pod "kube-proxy-j2qnm" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.107119  876220 pod_ready.go:81] duration metric: took 5.239707ms waiting for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107131  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146726  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.146753  876220 pod_ready.go:81] duration metric: took 39.614218ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146765  876220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:12.442685  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.442780  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.456555  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:12.941805  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.941902  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.955572  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.442111  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.442220  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.455769  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.941932  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.942051  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.957167  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.442727  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.442855  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.455220  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.941815  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.941911  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.955030  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.441942  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.442064  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.454228  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.942207  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.942299  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.955845  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.442537  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.442642  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.454339  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.941837  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.941933  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.955292  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
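The api_server.go entries above poll roughly every 500ms for a running kube-apiserver process by running pgrep over SSH. A small sketch of that fixed-interval poll follows, with a local pgrep as a stand-in for the SSH probe minikube actually uses.

// Sketch: poll at a fixed interval until the apiserver process appears
// or a deadline passes, mirroring the retry loop logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer is a local stand-in for the pgrep-over-SSH probe.
func checkAPIServer() error {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := checkAPIServer(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is up")
}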
	I1114 15:54:19.110067  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.110621  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Found IP for machine: 192.168.61.196
	I1114 15:54:19.110650  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserving static IP address...
	I1114 15:54:19.110682  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has current primary IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.111082  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.111142  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | skip adding static IP to network mk-default-k8s-diff-port-529430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"}
	I1114 15:54:19.111163  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserved static IP address: 192.168.61.196
	I1114 15:54:19.111178  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for SSH to be available...
	I1114 15:54:19.111191  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Getting to WaitForSSH function...
	I1114 15:54:19.113739  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114145  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.114196  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114327  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH client type: external
	I1114 15:54:19.114358  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa (-rw-------)
	I1114 15:54:19.114395  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:19.114417  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | About to run SSH command:
	I1114 15:54:19.114432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | exit 0
	I1114 15:54:19.213651  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:19.214087  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetConfigRaw
	I1114 15:54:19.214767  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.217678  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218072  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.218099  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218414  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:54:19.218634  876668 machine.go:88] provisioning docker machine ...
	I1114 15:54:19.218662  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:19.218923  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219132  876668 buildroot.go:166] provisioning hostname "default-k8s-diff-port-529430"
	I1114 15:54:19.219155  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219292  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.221719  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222106  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.222129  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222272  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.222435  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222606  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222748  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.222907  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.223312  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.223328  876668 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-529430 && echo "default-k8s-diff-port-529430" | sudo tee /etc/hostname
	I1114 15:54:19.373658  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-529430
	
	I1114 15:54:19.373691  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.376972  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377388  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.377432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377549  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.377754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.377934  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.378123  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.378325  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.378667  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.378685  876668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-529430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-529430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-529430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:19.523410  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:19.523453  876668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:19.523498  876668 buildroot.go:174] setting up certificates
	I1114 15:54:19.523511  876668 provision.go:83] configureAuth start
	I1114 15:54:19.523530  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.523872  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.526757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527213  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.527242  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.530193  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530590  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.530630  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530794  876668 provision.go:138] copyHostCerts
	I1114 15:54:19.530862  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:19.530886  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:19.530965  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:19.531069  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:19.531078  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:19.531104  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:19.531179  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:19.531188  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:19.531218  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:19.531285  876668 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-529430 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube default-k8s-diff-port-529430]
	I1114 15:54:19.845785  876668 provision.go:172] copyRemoteCerts
	I1114 15:54:19.845852  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:19.845880  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.849070  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849461  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.849492  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.849916  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.850139  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.850326  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:19.946041  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:19.976301  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1114 15:54:20.667697  876065 start.go:369] acquired machines lock for "no-preload-490998" in 59.048435079s
	I1114 15:54:20.667765  876065 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:54:20.667776  876065 fix.go:54] fixHost starting: 
	I1114 15:54:20.668233  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:20.668278  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:20.689041  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I1114 15:54:20.689574  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:20.690138  876065 main.go:141] libmachine: Using API Version  1
	I1114 15:54:20.690168  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:20.690554  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:20.690760  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:20.690909  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:54:20.692627  876065 fix.go:102] recreateIfNeeded on no-preload-490998: state=Stopped err=<nil>
	I1114 15:54:20.692652  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	W1114 15:54:20.692849  876065 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:54:20.694674  876065 out.go:177] * Restarting existing kvm2 VM for "no-preload-490998" ...
	I1114 15:54:17.454958  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:19.455250  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:20.001972  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:54:20.026531  876668 provision.go:86] duration metric: configureAuth took 502.998106ms
	I1114 15:54:20.026585  876668 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:20.026832  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:20.026965  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.030385  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.030791  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030974  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.031200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031423  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.031861  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.032341  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.032367  876668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:20.394771  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:20.394805  876668 machine.go:91] provisioned docker machine in 1.176155811s
	I1114 15:54:20.394818  876668 start.go:300] post-start starting for "default-k8s-diff-port-529430" (driver="kvm2")
	I1114 15:54:20.394832  876668 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:20.394853  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.395240  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:20.395288  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.398478  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.398906  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.398945  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.399107  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.399344  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.399584  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.399752  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.491251  876668 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:20.495507  876668 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:20.495538  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:20.495627  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:20.495718  876668 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:20.495814  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:20.504112  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:20.527100  876668 start.go:303] post-start completed in 132.264495ms
	I1114 15:54:20.527124  876668 fix.go:56] fixHost completed within 21.989733182s
	I1114 15:54:20.527150  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.530055  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530460  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.530502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530660  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.530868  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531069  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531281  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.531458  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.531874  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.531889  876668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:54:20.667502  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977260.612374456
	
	I1114 15:54:20.667529  876668 fix.go:206] guest clock: 1699977260.612374456
	I1114 15:54:20.667536  876668 fix.go:219] Guest: 2023-11-14 15:54:20.612374456 +0000 UTC Remote: 2023-11-14 15:54:20.527127621 +0000 UTC m=+270.585277055 (delta=85.246835ms)
	I1114 15:54:20.667591  876668 fix.go:190] guest clock delta is within tolerance: 85.246835ms
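The fix.go lines above compare the guest clock against the host clock and accept the restarted host only if the delta stays within tolerance. A minimal Go sketch of that check follows; the 2-second tolerance is an illustrative assumption, not necessarily the value minikube uses.

    package main

    import (
    	"fmt"
    	"time"
    )

    // checkClockDelta mirrors the kind of check logged above: compare the guest
    // clock against the host clock and report whether the difference is within
    // a tolerance. The tolerance passed in main is an assumption for illustration.
    func checkClockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(85 * time.Millisecond) // delta similar to the one in the log
    	delta, ok := checkClockDelta(guest, host, 2*time.Second)
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }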
	I1114 15:54:20.667604  876668 start.go:83] releasing machines lock for "default-k8s-diff-port-529430", held for 22.130251397s
	I1114 15:54:20.667642  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.668017  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:20.671690  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672166  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.672199  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672583  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673190  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673412  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673507  876668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:20.673573  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.673677  876668 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:20.673702  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.677394  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677505  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677813  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.677847  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678133  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.678165  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678228  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678331  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678456  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678543  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678799  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.679008  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.770378  876668 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:20.799026  876668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:20.952410  876668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:20.960020  876668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:20.960164  876668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:20.976497  876668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
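The find/mv step above disables any bridge or podman CNI configs so they no longer conflict with the runtime's own network setup. A small Go sketch of the same rename-to-*.mk_disabled idea, assuming direct filesystem access rather than the SSH runner used in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfigs sketches what the `find ... -exec mv {} {}.mk_disabled`
    // step above does: rename bridge/podman CNI configs so CRI-O ignores them.
    // Paths and the suffix match the log; error handling is simplified.
    func disableCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Println("disabled:", disabled)
    }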
	I1114 15:54:20.976537  876668 start.go:472] detecting cgroup driver to use...
	I1114 15:54:20.976623  876668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:20.995510  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:21.008750  876668 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:21.008824  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:21.021811  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:21.035329  876668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:21.148775  876668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:21.285242  876668 docker.go:219] disabling docker service ...
	I1114 15:54:21.285318  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:21.298782  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:21.316123  876668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:21.488090  876668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:21.618889  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:21.632974  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:21.655781  876668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:21.655882  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.669231  876668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:21.669316  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.678786  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.688193  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.698797  876668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
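The sed runs above pin the pause image and switch CRI-O to the cgroupfs cgroup manager in 02-crio.conf. A hedged Go sketch of the equivalent rewrite applied to the file contents in memory (the real flow shells out to sed over SSH):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf sketches the two sed edits in the log: force the pause
    // image and the cgroup manager lines in 02-crio.conf. It operates on the
    // config text as a string; this is not minikube's actual implementation.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }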
	I1114 15:54:21.709360  876668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:21.718312  876668 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:21.718380  876668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:21.736502  876668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:21.746439  876668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:21.863214  876668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:22.102179  876668 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:22.102265  876668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:22.108046  876668 start.go:540] Will wait 60s for crictl version
	I1114 15:54:22.108121  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:54:22.113795  876668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:22.165127  876668 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:22.165229  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.225931  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.294400  876668 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:17.442023  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.442115  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.454984  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:17.942288  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.942367  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.954587  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:18.415437  876396 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:18.415476  876396 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:18.415510  876396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:18.415594  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:18.457148  876396 cri.go:89] found id: ""
	I1114 15:54:18.457220  876396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:18.473763  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:18.482554  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:18.482618  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491282  876396 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491331  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:18.611750  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.639893  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.02808682s)
	I1114 15:54:19.639964  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.850775  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.939183  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
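The restart path above replaces a full kubeadm init with individual init phases run against the generated kubeadm.yaml (certs, kubeconfig, kubelet-start, control-plane, etcd). A rough Go sketch of driving those phases with os/exec; the paths mirror the log, error handling is minimal, and this is an illustration rather than minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runInitPhases runs the same kubeadm init phases, in the same order, as
    // the log above, against a previously written kubeadm.yaml.
    func runInitPhases(binDir, config string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", config)
    		cmd := exec.Command(binDir+"/kubeadm", args...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := runInitPhases("/var/lib/minikube/binaries/v1.16.0", "/var/tmp/minikube/kubeadm.yaml")
    	fmt.Println("init phases:", err)
    }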
	I1114 15:54:20.055296  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:20.055384  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.076978  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.591616  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.091982  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.591312  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.635294  876396 api_server.go:72] duration metric: took 1.579988958s to wait for apiserver process to appear ...
	I1114 15:54:21.635323  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:21.635345  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:20.696162  876065 main.go:141] libmachine: (no-preload-490998) Calling .Start
	I1114 15:54:20.696380  876065 main.go:141] libmachine: (no-preload-490998) Ensuring networks are active...
	I1114 15:54:20.697208  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network default is active
	I1114 15:54:20.697665  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network mk-no-preload-490998 is active
	I1114 15:54:20.698105  876065 main.go:141] libmachine: (no-preload-490998) Getting domain xml...
	I1114 15:54:20.698815  876065 main.go:141] libmachine: (no-preload-490998) Creating domain...
	I1114 15:54:22.152078  876065 main.go:141] libmachine: (no-preload-490998) Waiting to get IP...
	I1114 15:54:22.153475  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.153983  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.154071  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.153960  877583 retry.go:31] will retry after 305.242943ms: waiting for machine to come up
	I1114 15:54:22.460636  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.461432  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.461609  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.461568  877583 retry.go:31] will retry after 354.226558ms: waiting for machine to come up
	I1114 15:54:22.817225  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.817884  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.817999  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.817955  877583 retry.go:31] will retry after 337.727596ms: waiting for machine to come up
	I1114 15:54:23.157897  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.158614  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.158724  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.158679  877583 retry.go:31] will retry after 375.356441ms: waiting for machine to come up
	I1114 15:54:23.536061  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.536607  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.536633  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.536565  877583 retry.go:31] will retry after 652.853452ms: waiting for machine to come up
	I1114 15:54:22.295757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:22.299345  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.299749  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:22.299788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.300017  876668 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:22.305363  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
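The one-liner above rewrites /etc/hosts so host.minikube.internal points at the gateway IP. A small Go sketch of the same drop-then-append edit, operating on the file contents as a string for clarity:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry removes any existing line mentioning the host name and
    // appends a fresh "ip<TAB>name" entry, mirroring the grep -v / echo pair in
    // the log. It returns the new file contents instead of writing them.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if line == "" || strings.Contains(line, name) {
    			continue
    		}
    		out = append(out, line)
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
    	fmt.Print(upsertHostsEntry(hosts, "192.168.61.1", "host.minikube.internal"))
    }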
	I1114 15:54:22.318715  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:22.318773  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:22.368522  876668 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:22.368595  876668 ssh_runner.go:195] Run: which lz4
	I1114 15:54:22.373798  876668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:54:22.379337  876668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:22.379368  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:54:24.194028  876668 crio.go:444] Took 1.820276 seconds to copy over tarball
	I1114 15:54:24.194111  876668 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:21.457059  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:23.458432  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:26.636325  876396 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:54:26.636396  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:24.191080  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:24.191648  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:24.191685  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:24.191565  877583 retry.go:31] will retry after 883.93292ms: waiting for machine to come up
	I1114 15:54:25.076820  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:25.077325  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:25.077370  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:25.077290  877583 retry.go:31] will retry after 1.071889504s: waiting for machine to come up
	I1114 15:54:26.151239  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:26.151777  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:26.151812  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:26.151734  877583 retry.go:31] will retry after 1.05055701s: waiting for machine to come up
	I1114 15:54:27.204714  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:27.205193  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:27.205216  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:27.205147  877583 retry.go:31] will retry after 1.366779273s: waiting for machine to come up
	I1114 15:54:28.573131  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:28.573578  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:28.573605  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:28.573548  877583 retry.go:31] will retry after 1.629033633s: waiting for machine to come up
	I1114 15:54:27.635092  876668 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.440943465s)
	I1114 15:54:27.635134  876668 crio.go:451] Took 3.441078 seconds to extract the tarball
	I1114 15:54:27.635148  876668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:27.685486  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:27.742411  876668 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:54:27.742499  876668 cache_images.go:84] Images are preloaded, skipping loading
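The two crictl listings above drive the preload decision: when the expected kube-apiserver image for the target version is missing, the preload tarball is copied over and extracted with tar -I lz4; once it is present, image loading is skipped. A minimal Go sketch of that check, taking a plain list of repo tags instead of real crictl output:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // preloadNeeded reports whether the expected kube-apiserver tag for the
    // target Kubernetes version is absent from the runtime's image list, which
    // is the condition that triggers the preload tarball extraction above.
    func preloadNeeded(tags []string, k8sVersion string) bool {
    	want := "registry.k8s.io/kube-apiserver:" + k8sVersion
    	for _, t := range tags {
    		if strings.EqualFold(t, want) {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	before := []string{"registry.k8s.io/pause:3.9"}
    	after := append(before, "registry.k8s.io/kube-apiserver:v1.28.3")
    	fmt.Println("needs preload before extract:", preloadNeeded(before, "v1.28.3"))
    	fmt.Println("needs preload after extract:", preloadNeeded(after, "v1.28.3"))
    }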
	I1114 15:54:27.742596  876668 ssh_runner.go:195] Run: crio config
	I1114 15:54:27.815555  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:27.815579  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:27.815601  876668 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:27.815624  876668 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-529430 NodeName:default-k8s-diff-port-529430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:54:27.815789  876668 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-529430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:27.815921  876668 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-529430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1114 15:54:27.815999  876668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:54:27.825716  876668 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:27.825799  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:27.838987  876668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1114 15:54:27.855187  876668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:27.872995  876668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1114 15:54:27.890455  876668 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:27.895678  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:27.909953  876668 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430 for IP: 192.168.61.196
	I1114 15:54:27.909999  876668 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:27.910204  876668 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:27.910271  876668 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:27.910463  876668 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/client.key
	I1114 15:54:27.910558  876668 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key.0d67e2f2
	I1114 15:54:27.910616  876668 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key
	I1114 15:54:27.910753  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:27.910797  876668 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:27.910811  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:27.910872  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:27.910917  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:27.910950  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:27.911007  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:27.911985  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:27.937341  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:27.963511  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:27.990011  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:54:28.016668  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:28.048528  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:28.077392  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:28.107784  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:28.136600  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:28.163995  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:28.191715  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:28.223205  876668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:28.243672  876668 ssh_runner.go:195] Run: openssl version
	I1114 15:54:28.249895  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:28.260568  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266792  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266887  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.273048  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:28.283458  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:28.294810  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300316  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300384  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.306193  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:28.319260  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:28.332843  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339044  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339120  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.346094  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
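The openssl hash and ln -fs steps above install each CA certificate under /usr/share/ca-certificates and expose it through an /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL finds trusted certificates by subject hash. A small Go sketch of the symlink step, with the subject hash passed in (the log computes it with `openssl x509 -hash -noout`) rather than derived in code:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // linkCertByHash creates the <hash>.0 symlink in the certs directory that
    // points at the installed certificate, mirroring the `ln -fs` call above.
    func linkCertByHash(certPath, certsDir, hash string) error {
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	err := linkCertByHash("/etc/ssl/certs/832211.pem", "/etc/ssl/certs", "51391683")
    	fmt.Println("symlink result:", err)
    }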
	I1114 15:54:28.359711  876668 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:28.365300  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:28.372965  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:28.380378  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:28.387801  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:28.395228  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:28.401252  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
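The -checkend 86400 calls above verify that none of the control-plane certificates expires within the next 24 hours. A hedged Go sketch of the same check using crypto/x509; the path in main is one of the files probed in the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // the given window, which is what `openssl x509 -checkend` tests above.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }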
	I1114 15:54:28.407435  876668 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:28.407581  876668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:28.407663  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:28.462877  876668 cri.go:89] found id: ""
	I1114 15:54:28.462962  876668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:28.473800  876668 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:28.473828  876668 kubeadm.go:636] restartCluster start
	I1114 15:54:28.473885  876668 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:28.485255  876668 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.486649  876668 kubeconfig.go:92] found "default-k8s-diff-port-529430" server: "https://192.168.61.196:8444"
	I1114 15:54:28.489408  876668 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:28.499927  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.499990  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.512175  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.512193  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.512238  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.524128  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.025143  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.025234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.040757  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.525035  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.525153  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.538214  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.174172  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:28.174207  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:28.674934  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.145414  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.145459  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.174596  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.231115  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.231157  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.674653  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.813013  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.813052  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.174424  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.183371  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:30.183427  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.675007  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.686069  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:54:30.697376  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:54:30.697472  876396 api_server.go:131] duration metric: took 9.062139934s to wait for apiserver health ...
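The healthz probes above go from 403 (anonymous request before RBAC bootstrap finishes) through 500 (post-start hooks still failing) to 200, at which point the wait ends. A simplified Go sketch of such a polling loop; the interval, timeout, and TLS handling are illustrative assumptions, not minikube's settings:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the deadline passes.
    // TLS verification is skipped because, like the anonymous probe in the log,
    // only the status code matters here.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.151:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver healthy")
    }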
	I1114 15:54:30.697503  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:30.697535  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:30.699476  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:25.957052  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:28.490572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:30.701025  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:30.729153  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:30.770856  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:30.785989  876396 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:30.786041  876396 system_pods.go:61] "coredns-5644d7b6d9-dxtd8" [4d22eb1f-551c-49a1-a519-7420c3774e46] Running
	I1114 15:54:30.786051  876396 system_pods.go:61] "etcd-old-k8s-version-842105" [d4d5d869-b609-4017-8cf1-071b11f69d18] Running
	I1114 15:54:30.786057  876396 system_pods.go:61] "kube-apiserver-old-k8s-version-842105" [43e84141-4938-4808-bba5-14080a0a7b9e] Running
	I1114 15:54:30.786063  876396 system_pods.go:61] "kube-controller-manager-old-k8s-version-842105" [8fca7797-f3a1-4223-a921-0819aca95ce7] Running
	I1114 15:54:30.786069  876396 system_pods.go:61] "kube-proxy-kw2ns" [c6b5fbe3-a473-4120-bc41-fb85f6d3841d] Running
	I1114 15:54:30.786074  876396 system_pods.go:61] "kube-scheduler-old-k8s-version-842105" [c9cad8bb-b7a9-44fd-92d3-d3360284c9f3] Running
	I1114 15:54:30.786082  876396 system_pods.go:61] "metrics-server-74d5856cc6-q9hc5" [1333b6de-5f3f-4937-8e73-d2b7f2c6d37e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:30.786091  876396 system_pods.go:61] "storage-provisioner" [2d95ef7e-626e-4840-9f5d-708cd8c66576] Running
	I1114 15:54:30.786107  876396 system_pods.go:74] duration metric: took 15.207693ms to wait for pod list to return data ...
	I1114 15:54:30.786125  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:30.799034  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:30.799089  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:30.799105  876396 node_conditions.go:105] duration metric: took 12.974469ms to run NodePressure ...
	I1114 15:54:30.799137  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:31.065040  876396 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:31.068697  876396 retry.go:31] will retry after 147.435912ms: kubelet not initialised
	I1114 15:54:31.225671  876396 retry.go:31] will retry after 334.031544ms: kubelet not initialised
	I1114 15:54:31.565487  876396 retry.go:31] will retry after 641.328262ms: kubelet not initialised
	I1114 15:54:32.215327  876396 retry.go:31] will retry after 1.211422414s: kubelet not initialised
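Annotation: the "will retry after …: kubelet not initialised" lines here (and the later ones from the same process) come from a poll-with-growing-backoff loop: check a condition, and on failure sleep a jittered, increasing interval and try again. A self-contained sketch of that pattern follows; the intervals, jitter, and the placeholder condition are assumptions, not minikube's exact retry schedule.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling check until it succeeds or maxWait elapses,
    // logging the next delay the way the retry lines above do.
    func retryWithBackoff(check func() error, maxWait time.Duration) error {
    	start := time.Now()
    	delay := 150 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > maxWait {
    			return fmt.Errorf("giving up after %v: %w", maxWait, err)
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2 // grow the base interval each attempt
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	}, 2*time.Minute)
    	fmt.Println("done:", err)
    }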
	I1114 15:54:30.204276  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:30.204775  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:30.204811  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:30.204713  877583 retry.go:31] will retry after 1.909641151s: waiting for machine to come up
	I1114 15:54:32.115658  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:32.116175  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:32.116209  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:32.116116  877583 retry.go:31] will retry after 3.266336566s: waiting for machine to come up
	I1114 15:54:30.024900  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.025024  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.041104  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.524842  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.524920  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.540643  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.025166  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.040723  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.525252  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.525364  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.537978  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.024495  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.024626  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.037625  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.524934  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.525053  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.540579  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.025237  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.025366  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.037675  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.524206  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.524300  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.537100  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.025150  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.039435  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.525030  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.525140  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.541014  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.957869  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.458285  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:35.458815  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.432677  876396 retry.go:31] will retry after 864.36813ms: kubelet not initialised
	I1114 15:54:34.302450  876396 retry.go:31] will retry after 2.833071739s: kubelet not initialised
	I1114 15:54:37.142128  876396 retry.go:31] will retry after 2.880672349s: kubelet not initialised
	I1114 15:54:35.386010  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:35.386483  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:35.386526  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:35.386417  877583 retry.go:31] will retry after 3.791360608s: waiting for machine to come up
	I1114 15:54:35.024814  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.024924  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.038035  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:35.524433  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.524540  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.538065  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.024585  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.024690  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.036540  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.525201  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.525293  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.537751  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.024292  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.024388  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.037480  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.525115  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.525234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.538365  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.025002  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:38.025148  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:38.036994  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.500770  876668 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:38.500813  876668 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:38.500860  876668 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:38.500951  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:38.538468  876668 cri.go:89] found id: ""
	I1114 15:54:38.538571  876668 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:38.554809  876668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:38.563961  876668 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:38.564025  876668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572905  876668 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572930  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:38.694403  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.614869  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.815977  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.914051  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
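Annotation: the block above is the reconfigure path: with the kube-system containers gone and the /etc/kubernetes/*.conf files missing, the run re-executes kubeadm one init phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A rough sketch of driving those phases from Go is below; the binary and config paths follow the log, but running locally without sudo/env and the error handling are simplifying assumptions.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.28.3/kubeadm" // path pattern from the log
    	config := "/var/tmp/minikube/kubeadm.yaml"

    	// The same phase sequence the log runs over SSH, executed directly here.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", config)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running:", kubeadm, args)
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("phase %v failed: %v", phase, err)
    		}
    	}
    }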
	I1114 15:54:37.956992  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.957705  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.179165  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.179746  876065 main.go:141] libmachine: (no-preload-490998) Found IP for machine: 192.168.50.251
	I1114 15:54:39.179773  876065 main.go:141] libmachine: (no-preload-490998) Reserving static IP address...
	I1114 15:54:39.179792  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has current primary IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.180259  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.180295  876065 main.go:141] libmachine: (no-preload-490998) Reserved static IP address: 192.168.50.251
	I1114 15:54:39.180328  876065 main.go:141] libmachine: (no-preload-490998) DBG | skip adding static IP to network mk-no-preload-490998 - found existing host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"}
	I1114 15:54:39.180349  876065 main.go:141] libmachine: (no-preload-490998) DBG | Getting to WaitForSSH function...
	I1114 15:54:39.180368  876065 main.go:141] libmachine: (no-preload-490998) Waiting for SSH to be available...
	I1114 15:54:39.182637  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183005  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.183037  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183157  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH client type: external
	I1114 15:54:39.183185  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa (-rw-------)
	I1114 15:54:39.183218  876065 main.go:141] libmachine: (no-preload-490998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:39.183239  876065 main.go:141] libmachine: (no-preload-490998) DBG | About to run SSH command:
	I1114 15:54:39.183251  876065 main.go:141] libmachine: (no-preload-490998) DBG | exit 0
	I1114 15:54:39.276793  876065 main.go:141] libmachine: (no-preload-490998) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:39.277095  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetConfigRaw
	I1114 15:54:39.277799  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.281002  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281360  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.281393  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281696  876065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/config.json ...
	I1114 15:54:39.281970  876065 machine.go:88] provisioning docker machine ...
	I1114 15:54:39.281997  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:39.282236  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282395  876065 buildroot.go:166] provisioning hostname "no-preload-490998"
	I1114 15:54:39.282416  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.285099  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285498  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.285527  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285695  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.285865  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286026  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286277  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.286523  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.286978  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.287007  876065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-490998 && echo "no-preload-490998" | sudo tee /etc/hostname
	I1114 15:54:39.419452  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-490998
	
	I1114 15:54:39.419493  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.422544  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.422912  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.422951  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.423134  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.423360  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423591  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423756  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.423915  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.424324  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.424363  876065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-490998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-490998/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-490998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:39.552044  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:39.552085  876065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:39.552106  876065 buildroot.go:174] setting up certificates
	I1114 15:54:39.552118  876065 provision.go:83] configureAuth start
	I1114 15:54:39.552127  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.552438  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.555275  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555660  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.555771  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555936  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.558628  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559004  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.559042  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559181  876065 provision.go:138] copyHostCerts
	I1114 15:54:39.559247  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:39.559273  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:39.559337  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:39.559498  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:39.559512  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:39.559547  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:39.559612  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:39.559620  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:39.559644  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:39.559697  876065 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.no-preload-490998 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube no-preload-490998]
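Annotation: provision.go here generates a per-machine server certificate whose SANs cover the node IP, localhost, and the machine name, signed with the minikube CA key. The sketch below produces a certificate with the same SAN set but self-signs it for brevity; signing against the real ca.pem/ca-key.pem, the key size, and the validity window are details this sketch does not reproduce.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-490998"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // validity window is an assumption
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list taken from the provision.go line above.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-490998"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.50.251"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed here; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	certOut, err := os.Create("server.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	certOut.Close()
    	keyOut, err := os.Create("server-key.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	keyOut.Close()
    }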
	I1114 15:54:39.728218  876065 provision.go:172] copyRemoteCerts
	I1114 15:54:39.728286  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:39.728314  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.731482  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.731920  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.731966  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.732138  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.732376  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.732605  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.732802  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:39.819537  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:39.848716  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 15:54:39.876339  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:54:39.917428  876065 provision.go:86] duration metric: configureAuth took 365.293803ms
	I1114 15:54:39.917461  876065 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:39.917686  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:39.917783  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.920823  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921417  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.921457  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921785  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.921989  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922170  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922351  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.922516  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.922992  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.923017  876065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:40.270821  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:40.270851  876065 machine.go:91] provisioned docker machine in 988.864728ms
	I1114 15:54:40.270865  876065 start.go:300] post-start starting for "no-preload-490998" (driver="kvm2")
	I1114 15:54:40.270878  876065 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:40.270910  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.271296  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:40.271331  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.274197  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274517  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.274547  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274784  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.275045  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.275209  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.275379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.363810  876065 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:40.368485  876065 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:40.368515  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:40.368599  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:40.368688  876065 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:40.368820  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:40.378691  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:40.401789  876065 start.go:303] post-start completed in 130.90895ms
	I1114 15:54:40.401816  876065 fix.go:56] fixHost completed within 19.734039545s
	I1114 15:54:40.401848  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.404413  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404791  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.404824  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404962  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.405212  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405442  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405614  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.405840  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:40.406318  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:40.406338  876065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:54:40.521875  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977280.490539427
	
	I1114 15:54:40.521907  876065 fix.go:206] guest clock: 1699977280.490539427
	I1114 15:54:40.521917  876065 fix.go:219] Guest: 2023-11-14 15:54:40.490539427 +0000 UTC Remote: 2023-11-14 15:54:40.401821935 +0000 UTC m=+361.372113130 (delta=88.717492ms)
	I1114 15:54:40.521945  876065 fix.go:190] guest clock delta is within tolerance: 88.717492ms
	I1114 15:54:40.521952  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 19.854220019s
	I1114 15:54:40.521990  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.522294  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:40.525204  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525567  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.525611  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525786  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526412  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526589  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526682  876065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:40.526727  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.526847  876065 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:40.526881  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.529470  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529673  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529863  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.529895  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530047  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530189  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.530224  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530226  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530415  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530480  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530594  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530677  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.530726  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530881  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.634647  876065 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:40.641680  876065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:40.784919  876065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:40.791364  876065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:40.791466  876065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:40.814464  876065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:40.814496  876065 start.go:472] detecting cgroup driver to use...
	I1114 15:54:40.814608  876065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:40.834599  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:40.851666  876065 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:40.851761  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:40.870359  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:40.885345  876065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:41.042220  876065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:41.174015  876065 docker.go:219] disabling docker service ...
	I1114 15:54:41.174101  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:41.188849  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:41.201322  876065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:41.329124  876065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:41.456116  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:41.477162  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:41.497860  876065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:41.497932  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.509750  876065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:41.509843  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.521944  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.532916  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
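Annotation: the sed commands above point /etc/crio/crio.conf.d/02-crio.conf at the pause image and switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup. The same rewrite can be expressed as a small Go program; the substitutions and file name mirror the log, but doing this in-process rather than via sed over SSH is purely for illustration.

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "02-crio.conf" // stands in for /etc/crio/crio.conf.d/02-crio.conf
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	conf := string(data)
    	// Drop any existing conmon_cgroup line, as the log's "/conmon_cgroup = .*/d" does.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
    	// Equivalent of the pause_image and cgroup_manager substitutions, with
    	// conmon_cgroup appended right after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }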
	I1114 15:54:41.545469  876065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:41.556976  876065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:41.567322  876065 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:41.567401  876065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:41.583043  876065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:41.593941  876065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:41.717384  876065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:41.907278  876065 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:41.907351  876065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:41.912763  876065 start.go:540] Will wait 60s for crictl version
	I1114 15:54:41.912843  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:41.917105  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:41.965326  876065 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:41.965418  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.016065  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.079721  876065 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:40.028538  876396 retry.go:31] will retry after 2.943912692s: kubelet not initialised
	I1114 15:54:42.081301  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:42.084358  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.084771  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:42.084805  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.085014  876065 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:42.089551  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:42.102676  876065 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:42.102730  876065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:42.145434  876065 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:42.145479  876065 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:42.145570  876065 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.145592  876065 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.145621  876065 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.145620  876065 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.145662  876065 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1114 15:54:42.145692  876065 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.145819  876065 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.145564  876065 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.147966  876065 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.147967  876065 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.148056  876065 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1114 15:54:42.147970  876065 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.148093  876065 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.147960  876065 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.318368  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1114 15:54:42.318578  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.325647  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.340363  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.375378  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.473131  876065 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1114 15:54:42.473195  876065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.473202  876065 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1114 15:54:42.473235  876065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.473253  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.473283  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.511600  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.554432  876065 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1114 15:54:42.554502  876065 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1114 15:54:42.554572  876065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.554599  876065 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1114 15:54:42.554618  876065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.554632  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554657  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554532  876065 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.554724  876065 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1114 15:54:42.554750  876065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.554776  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554778  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554907  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.554969  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.576922  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.577004  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.577114  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.577535  876065 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1114 15:54:42.577591  876065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.577631  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.655186  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.655318  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1114 15:54:42.655449  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1114 15:54:42.655473  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:42.655536  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.706186  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1114 15:54:42.706257  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.706283  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1114 15:54:42.706304  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:42.706372  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:42.706408  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1114 15:54:42.706548  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:42.737003  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1114 15:54:42.737032  876065 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737093  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737102  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1114 15:54:42.737179  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1114 15:54:42.737237  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:42.769211  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1114 15:54:42.769251  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1114 15:54:42.769304  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1114 15:54:42.769289  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 15:54:42.769428  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:54:44.006164  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.268897316s)
	I1114 15:54:44.006206  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1114 15:54:44.006240  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.236783751s)
	I1114 15:54:44.006275  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1114 15:54:44.006283  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.269163879s)
	I1114 15:54:44.006297  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1114 15:54:44.006322  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:44.006375  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:40.016931  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:40.017030  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.030798  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.541996  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.042023  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.542537  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.042880  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.542514  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.577021  876668 api_server.go:72] duration metric: took 2.560093027s to wait for apiserver process to appear ...
	I1114 15:54:42.577059  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:42.577088  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.577767  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:42.577805  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.578225  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:43.078953  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.457425  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:44.460290  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:42.978588  876396 retry.go:31] will retry after 5.776997827s: kubelet not initialised
	I1114 15:54:46.326192  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.326231  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.326249  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.390609  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.390668  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.579140  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.590569  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:46.590606  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.079186  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.084460  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.084483  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.578774  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.588878  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.588919  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:48.079047  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:48.084809  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:54:48.098877  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:48.098941  876668 api_server.go:131] duration metric: took 5.521873886s to wait for apiserver health ...
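
	For reference, the healthz sequence above is the usual readiness gate for a restarted apiserver: "connection refused" while the process is still starting, 403 before the RBAC bootstrap roles exist, 500 while poststarthooks are still failing, and finally 200. A minimal sketch of such a poll in Go, assuming client is an *http.Client already configured to trust the cluster CA (or to skip verification in a throwaway test VM); this is illustrative only, the real logic lives in minikube's api_server.go:

	package health

	import (
		"context"
		"io"
		"log"
		"net/http"
		"time"
	)

	// waitForHealthz polls <apiserver>/healthz until it returns 200 OK or ctx expires.
	func waitForHealthz(ctx context.Context, client *http.Client, url string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				// 403 (anonymous user, RBAC not bootstrapped yet) and 500
				// (failed poststarthooks) are expected while the control plane settles.
				log.Printf("healthz %d: %s", resp.StatusCode, body)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}
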
	I1114 15:54:48.098955  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:48.098972  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:48.101010  876668 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:47.219243  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.212835904s)
	I1114 15:54:47.219281  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1114 15:54:47.219308  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:47.219472  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:48.102440  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:48.154163  876668 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
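
	The 457-byte file copied here is the bridge CNI config minikube generates for the kvm2 + crio combination. The exact bytes are not shown in the log; for orientation only, a conflist of that general shape (every field value below is an assumption, not the file minikube actually wrote) could be produced like this:

	package cni

	import "os"

	// illustrativeConflist is NOT the file minikube wrote; it is a generic
	// bridge + portmap chain of the kind /etc/cni/net.d/1-k8s.conflist contains.
	const illustrativeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	// writeConflist drops the sample config where the kubelet's CNI plugin will find it.
	func writeConflist(path string) error {
		return os.WriteFile(path, []byte(illustrativeConflist), 0o644)
	}
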
	I1114 15:54:48.212336  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:48.229819  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:48.229862  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:48.229874  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:48.229886  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:48.229896  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:48.229905  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:48.229913  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:48.229923  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:48.229934  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:48.229944  876668 system_pods.go:74] duration metric: took 17.577706ms to wait for pod list to return data ...
	I1114 15:54:48.229961  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:48.236002  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:48.236043  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:48.236057  876668 node_conditions.go:105] duration metric: took 6.089691ms to run NodePressure ...
	I1114 15:54:48.236093  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:48.608191  876668 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622192  876668 kubeadm.go:787] kubelet initialised
	I1114 15:54:48.622221  876668 kubeadm.go:788] duration metric: took 13.999979ms waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622232  876668 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:48.629670  876668 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.636566  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636594  876668 pod_ready.go:81] duration metric: took 6.892422ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.636611  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636619  876668 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.643982  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644013  876668 pod_ready.go:81] duration metric: took 7.383826ms waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.644030  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644037  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.649791  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649815  876668 pod_ready.go:81] duration metric: took 5.769971ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.649825  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649833  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.655071  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655100  876668 pod_ready.go:81] duration metric: took 5.259243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.655113  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655121  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.018817  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018849  876668 pod_ready.go:81] duration metric: took 363.719341ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.018863  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018872  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.417556  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417588  876668 pod_ready.go:81] duration metric: took 398.704259ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.417600  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417607  876668 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.816654  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816692  876668 pod_ready.go:81] duration metric: took 399.075859ms waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.816712  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816721  876668 pod_ready.go:38] duration metric: took 1.194471296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
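
	Each pod_ready.go wait above boils down to reading the pod's Ready condition and retrying while the hosting node itself is still NotReady. A compact sketch of that check with client-go, assuming a kubeconfig path is available (package and function names here are illustrative, not minikube's internals):

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls a pod until it is Ready or the context expires.
	func waitPodReady(ctx context.Context, kubeconfig, ns, name string) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}
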
	I1114 15:54:49.816765  876668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:54:49.830335  876668 ops.go:34] apiserver oom_adj: -16
	I1114 15:54:49.830363  876668 kubeadm.go:640] restartCluster took 21.356528166s
	I1114 15:54:49.830372  876668 kubeadm.go:406] StartCluster complete in 21.422955285s
	I1114 15:54:49.830390  876668 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.830502  876668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:54:49.832470  876668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.859435  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:54:49.859707  876668 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:54:49.859810  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:49.859852  876668 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859873  876668 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859885  876668 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-529430"
	I1114 15:54:49.859892  876668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-529430"
	W1114 15:54:49.859895  876668 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:54:49.859954  876668 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859973  876668 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.859981  876668 addons.go:240] addon metrics-server should already be in state true
	I1114 15:54:49.860025  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.859956  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.860306  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860345  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860438  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860452  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860489  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860491  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.866006  876668 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-529430" context rescaled to 1 replicas
	I1114 15:54:49.866053  876668 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:54:49.878650  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I1114 15:54:49.878976  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I1114 15:54:49.879627  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I1114 15:54:49.891649  876668 out.go:177] * Verifying Kubernetes components...
	I1114 15:54:49.893450  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:54:49.892232  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892275  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892329  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.894259  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894282  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894473  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894486  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894610  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894623  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894687  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894892  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.894952  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894993  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.895598  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.895642  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.896296  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.896321  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.899095  876668 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.899120  876668 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:54:49.899151  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.899576  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.899622  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.917834  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1114 15:54:49.917842  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I1114 15:54:49.918442  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.918505  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.919007  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919026  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919167  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919187  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919493  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919562  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919803  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.920191  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.920237  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.922764  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1114 15:54:49.922969  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.924925  876668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:49.923380  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.926603  876668 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:49.926625  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:54:49.926647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.927991  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.928012  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.928459  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.928683  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.930696  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.930740  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.931131  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.931154  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.931330  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.931491  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.931647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.931775  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.934128  876668 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:54:49.936007  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:54:49.936031  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:54:49.936056  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.939725  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.939782  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I1114 15:54:49.940336  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.940442  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.940467  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.940822  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.941060  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.941093  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.941095  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.941211  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.941388  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.941856  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.942057  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.943639  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.943972  876668 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:49.943991  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:54:49.944009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.947172  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947631  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.947663  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947902  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.948102  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.948278  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.948579  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:46.955010  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:48.955172  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:50.066801  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:50.084526  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:54:50.084555  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:54:50.145315  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:50.145671  876668 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:50.146084  876668 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 15:54:50.151627  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:54:50.151646  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:54:50.216318  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:50.216349  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:54:50.316434  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:51.787528  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642164298s)
	I1114 15:54:51.787644  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787672  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.787695  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.720847981s)
	I1114 15:54:51.787744  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788039  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788064  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788075  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788086  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788094  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788109  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788119  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790294  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790322  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.790327  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790349  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.803844  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.803875  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.804205  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.804238  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.804239  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.925929  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.609443677s)
	I1114 15:54:51.926001  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926019  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926385  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926429  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.926456  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926468  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926483  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926795  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926814  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926826  876668 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-529430"
	I1114 15:54:51.926829  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:52.146969  876668 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1114 15:54:48.761692  876396 retry.go:31] will retry after 7.067385779s: kubelet not initialised
	I1114 15:54:50.000157  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.780649338s)
	I1114 15:54:50.000194  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1114 15:54:50.000227  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:50.000281  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:52.291215  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (2.290903759s)
	I1114 15:54:52.291244  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1114 15:54:52.291271  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:52.291312  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:53.739008  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.447671823s)
	I1114 15:54:53.739041  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1114 15:54:53.739066  876065 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:53.739126  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:52.194351  876668 addons.go:502] enable addons completed in 2.33463136s: enabled=[storage-provisioner default-storageclass metrics-server]
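
	Addon enablement is a series of kubectl apply -f calls against the freshly copied manifests, run with the cluster's own kubeconfig, as the storage-provisioner and metrics-server invocations above show. A minimal sketch of that pattern (paths and names are placeholders, not minikube's code, and the real invocation is prefixed with sudo on the guest):

	package addons

	import (
		"context"
		"fmt"
		"os/exec"
	)

	// applyManifests shells out to the cluster's bundled kubectl, mirroring the
	// `kubectl apply -f ...` invocations in the log above.
	func applyManifests(ctx context.Context, kubectl, kubeconfig string, files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.CommandContext(ctx, kubectl, args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}
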
	I1114 15:54:52.220203  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:54.220773  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:50.957159  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:53.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.458026  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.834422  876396 retry.go:31] will retry after 18.847542128s: kubelet not initialised
	I1114 15:54:56.221753  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:56.720961  876668 node_ready.go:49] node "default-k8s-diff-port-529430" has status "Ready":"True"
	I1114 15:54:56.720989  876668 node_ready.go:38] duration metric: took 6.575288694s waiting for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:56.721001  876668 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:56.730382  876668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736722  876668 pod_ready.go:92] pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:56.736761  876668 pod_ready.go:81] duration metric: took 6.345209ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736774  876668 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:58.773825  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:57.458580  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:59.956188  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:01.061681  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.322513643s)
	I1114 15:55:01.061716  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1114 15:55:01.061753  876065 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.061812  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.811277  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1114 15:55:01.811342  876065 cache_images.go:123] Successfully loaded all cached images
	I1114 15:55:01.811352  876065 cache_images.go:92] LoadImages completed in 19.665858366s
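
	The LoadImages phase above follows a fixed pattern per image: check whether the cached tarball already exists on the node (the "copy: skipping ... (exists)" lines), then load it into the CRI-O image store with podman. A local sketch of that pattern, assuming the commands are run directly rather than through minikube's SSH runner:

	package images

	import (
		"context"
		"fmt"
		"os/exec"
	)

	// loadCachedImage mirrors the per-image steps in the log: confirm the tarball
	// is present, then ask podman to load it so CRI-O can serve the image.
	func loadCachedImage(ctx context.Context, tarball string) error {
		if err := exec.CommandContext(ctx, "stat", "-c", "%s %y", tarball).Run(); err != nil {
			return fmt.Errorf("cached tarball not present: %w", err)
		}
		out, err := exec.CommandContext(ctx, "sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load: %v\n%s", err, out)
		}
		return nil
	}
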
	I1114 15:55:01.811461  876065 ssh_runner.go:195] Run: crio config
	I1114 15:55:01.881576  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:01.881603  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:01.881622  876065 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:55:01.881646  876065 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-490998 NodeName:no-preload-490998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:55:01.881781  876065 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-490998"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:55:01.881859  876065 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-490998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:55:01.881918  876065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:55:01.892613  876065 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:55:01.892696  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:55:01.902267  876065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1114 15:55:01.919728  876065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:55:01.936188  876065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1114 15:55:01.954510  876065 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1114 15:55:01.958337  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:55:01.970290  876065 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998 for IP: 192.168.50.251
	I1114 15:55:01.970328  876065 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:55:01.970513  876065 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:55:01.970563  876065 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:55:01.970662  876065 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/client.key
	I1114 15:55:01.970794  876065 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key.6b358a63
	I1114 15:55:01.970857  876065 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key
	I1114 15:55:01.971003  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:55:01.971065  876065 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:55:01.971079  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:55:01.971123  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:55:01.971160  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:55:01.971192  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:55:01.971252  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:55:01.972129  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:55:01.996012  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:55:02.020778  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:55:02.044395  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:55:02.066866  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:55:02.089331  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:55:02.113148  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:55:02.136083  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:55:02.157833  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:55:02.181150  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:55:02.203155  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:55:02.225839  876065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:55:02.243335  876065 ssh_runner.go:195] Run: openssl version
	I1114 15:55:02.249465  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:55:02.259874  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264340  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264401  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.270441  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:55:02.282031  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:55:02.293297  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298093  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298155  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.303668  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:55:02.315423  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:55:02.325976  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332124  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332194  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.339377  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:55:02.350318  876065 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:55:02.354796  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:55:02.360867  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:55:02.366306  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:55:02.372186  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:55:02.377900  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:55:02.383519  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
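Each of the openssl runs above uses `-checkend 86400` to confirm that none of the control-plane certificates expires within the next 24 hours before the cluster is restarted. A minimal Go sketch of the same expiry check, with the certificate path used only as an illustrative placeholder, might look like this:

```go
// Sketch of the "openssl x509 -noout -checkend 86400" test: report whether a
// PEM-encoded certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin returns true if the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the real run checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```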
	I1114 15:55:02.389128  876065 kubeadm.go:404] StartCluster: {Name:no-preload-490998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:55:02.389229  876065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:55:02.389304  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:02.428473  876065 cri.go:89] found id: ""
	I1114 15:55:02.428578  876065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:55:02.439944  876065 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:55:02.439969  876065 kubeadm.go:636] restartCluster start
	I1114 15:55:02.440079  876065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:55:02.450025  876065 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.451533  876065 kubeconfig.go:92] found "no-preload-490998" server: "https://192.168.50.251:8443"
	I1114 15:55:02.454290  876065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:55:02.463352  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.463410  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.474007  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.474025  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.474065  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.484826  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.985519  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.985595  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.998224  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.485905  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.486059  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.499281  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.985805  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.985925  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.998086  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:00.819591  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:02.773550  876668 pod_ready.go:92] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.773573  876668 pod_ready.go:81] duration metric: took 6.036790568s waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.773582  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778746  876668 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.778764  876668 pod_ready.go:81] duration metric: took 5.176465ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778772  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784332  876668 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.784353  876668 pod_ready.go:81] duration metric: took 5.572815ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784366  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789492  876668 pod_ready.go:92] pod "kube-proxy-zpchs" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.789514  876668 pod_ready.go:81] duration metric: took 5.139759ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789524  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796606  876668 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.796628  876668 pod_ready.go:81] duration metric: took 7.097079ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796639  876668 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.454894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.956449  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.485284  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.485387  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.498240  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:04.985846  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.985936  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.998901  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.485250  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.485365  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.497261  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.985411  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.985511  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.997656  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.485227  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.485332  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.497310  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.985893  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.985977  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.997585  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.485903  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.486001  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.498532  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.985881  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.985958  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.997898  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.485400  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.485512  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.497446  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.985912  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.986015  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.998121  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.081742  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:07.082515  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.580987  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:06.957307  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.455227  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.485641  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.485735  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.498347  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:09.985970  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.986073  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.997958  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.485503  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.485600  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.497407  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.985577  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.985655  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.998624  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.485146  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.485250  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.497837  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.985423  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.985551  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.997959  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:12.464381  876065 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:55:12.464449  876065 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:55:12.464478  876065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:55:12.464582  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:12.505435  876065 cri.go:89] found id: ""
	I1114 15:55:12.505532  876065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:55:12.522470  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:55:12.532890  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:55:12.532982  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542115  876065 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542141  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:12.684875  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:13.897464  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.21254145s)
	I1114 15:55:13.897509  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:11.582332  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.085102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:11.955438  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.455506  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.687822  876396 kubeadm.go:787] kubelet initialised
	I1114 15:55:14.687849  876396 kubeadm.go:788] duration metric: took 43.622781532s waiting for restarted kubelet to initialise ...
	I1114 15:55:14.687857  876396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:14.693560  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698796  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.698819  876396 pod_ready.go:81] duration metric: took 5.232669ms waiting for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698828  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703879  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.703903  876396 pod_ready.go:81] duration metric: took 5.067006ms waiting for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703916  876396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708064  876396 pod_ready.go:92] pod "etcd-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.708093  876396 pod_ready.go:81] duration metric: took 4.168333ms waiting for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708106  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713030  876396 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.713055  876396 pod_ready.go:81] duration metric: took 4.939899ms waiting for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713067  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087824  876396 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.087857  876396 pod_ready.go:81] duration metric: took 374.780312ms waiting for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087873  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.486984  876396 pod_ready.go:92] pod "kube-proxy-kw2ns" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.487011  876396 pod_ready.go:81] duration metric: took 399.130772ms waiting for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.487020  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886624  876396 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.886658  876396 pod_ready.go:81] duration metric: took 399.628757ms waiting for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886671  876396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
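The pod_ready waits in this log poll each system-critical pod until its Ready condition reports True, giving up after the stated per-pod timeout. A rough client-go sketch of that loop, with the kubeconfig path, namespace, and pod name as illustrative placeholders rather than values taken from this run, could be:

```go
// Sketch of a "wait until pod is Ready" poll using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 4 minutes, mirroring the timeouts seen above.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-example", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep polling
		}
		return isReady(pod), nil
	})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}
```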
	I1114 15:55:14.096314  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.174495  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.254647  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:55:14.254765  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.273596  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.788350  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.288506  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.788580  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.288476  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.787853  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.816380  876065 api_server.go:72] duration metric: took 2.561735945s to wait for apiserver process to appear ...
	I1114 15:55:16.816408  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:55:16.816428  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:16.582309  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:18.584599  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:16.957605  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:19.457613  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.541438  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.541473  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:20.541490  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:20.582790  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.582838  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:21.083891  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.089625  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.089658  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:21.583184  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.599539  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.599576  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:22.083098  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:22.088480  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 15:55:22.096517  876065 api_server.go:141] control plane version: v1.28.3
	I1114 15:55:22.096545  876065 api_server.go:131] duration metric: took 5.280130119s to wait for apiserver health ...
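The healthz probing above tolerates the early 403 (anonymous access denied) and 500 (post-start hooks such as rbac/bootstrap-roles still running) responses and only stops once /healthz returns 200. A simplified Go poll of that endpoint, skipping TLS verification because the cluster CA is not assumed to be trusted on the host running the check, might be:

```go
// Sketch of the apiserver healthz wait: request /healthz until it answers
// 200 OK, treating 403/500 as "not ready yet". URL and timeout are placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not trusted here, so verification
		// is skipped for this plain liveness probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.251:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```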
	I1114 15:55:22.096558  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:22.096568  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:22.098612  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:55:18.194723  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.195126  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.196472  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.100184  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:55:22.125049  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:55:22.150893  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:55:22.163922  876065 system_pods.go:59] 8 kube-system pods found
	I1114 15:55:22.163958  876065 system_pods.go:61] "coredns-5dd5756b68-n77fz" [e2f5ce73-a65e-40da-b554-c929f093a1a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:55:22.163970  876065 system_pods.go:61] "etcd-no-preload-490998" [01e272b5-4463-431d-8ed1-f561a90b667d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:55:22.163983  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [529f79fd-eae5-44e9-971d-b3ecb5ed025d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:55:22.163989  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ea299234-2456-4171-bac0-8e8ff4998596] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:55:22.163994  876065 system_pods.go:61] "kube-proxy-6hqk5" [7233dd72-138c-4148-834b-2dcb83a4cf00] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:55:22.163999  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [666e8a03-50b1-4b08-84f3-c3c6ec8a5452] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:55:22.164005  876065 system_pods.go:61] "metrics-server-57f55c9bc5-6lg6h" [7afa1e38-c64c-4d03-9b00-5765e7e251ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:55:22.164036  876065 system_pods.go:61] "storage-provisioner" [1090ed8a-6424-4980-9ea7-b43e998d1eb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:55:22.164050  876065 system_pods.go:74] duration metric: took 13.132475ms to wait for pod list to return data ...
	I1114 15:55:22.164058  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:55:22.167930  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:55:22.168020  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 15:55:22.168033  876065 node_conditions.go:105] duration metric: took 3.969303ms to run NodePressure ...
	I1114 15:55:22.168055  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:22.456975  876065 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470174  876065 kubeadm.go:787] kubelet initialised
	I1114 15:55:22.470202  876065 kubeadm.go:788] duration metric: took 13.201285ms waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470216  876065 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:22.483150  876065 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:21.081628  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:23.083015  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:21.955808  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.455829  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.696004  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.195514  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.514847  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.519442  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.013526  876065 pod_ready.go:92] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:27.013584  876065 pod_ready.go:81] duration metric: took 4.530407487s waiting for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:27.013600  876065 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:29.032979  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:25.582366  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.080716  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.456123  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.955087  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:29.694646  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.194401  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:31.033810  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:33.033026  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.033058  876065 pod_ready.go:81] duration metric: took 6.019448696s waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.033071  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039148  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.039180  876065 pod_ready.go:81] duration metric: took 6.099138ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039194  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049651  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.049675  876065 pod_ready.go:81] duration metric: took 10.473938ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049685  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061928  876065 pod_ready.go:92] pod "kube-proxy-6hqk5" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.061971  876065 pod_ready.go:81] duration metric: took 12.277038ms waiting for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061984  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071422  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.071452  876065 pod_ready.go:81] duration metric: took 9.456301ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071465  876065 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:30.081625  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.082675  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.581547  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:30.955154  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.957772  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.454775  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.194959  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:36.195495  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.339391  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.340404  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.083295  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.584210  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.956659  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:38.696669  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.194485  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.838699  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.840605  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.081223  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.081468  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.454630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.455871  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:43.195172  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:45.195687  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.339878  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.838910  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.841677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.082382  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.582248  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.457525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.955133  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:47.695467  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.195263  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.339284  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.340315  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.082546  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.581238  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.955630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.454502  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.455395  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:52.694030  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:54.694593  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:56.695136  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.838685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.838864  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.581986  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.582037  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.582635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.955377  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.963166  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:01.195573  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.840578  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.338828  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.082323  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.582531  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.454214  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.454975  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:03.198457  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:05.694675  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.339632  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.340001  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.840358  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:07.082081  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:09.582483  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.455257  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.455373  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.457344  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.196641  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.693989  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.339845  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:13.839805  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.583615  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:14.083682  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.957092  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.456347  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.694792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.200049  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.339768  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:18.839853  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.583278  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:19.081994  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.954665  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.454724  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.697859  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.194201  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.194738  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.840457  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.339880  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:21.082759  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.581646  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.457299  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.954029  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.694448  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.696563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:25.342126  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:27.839304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.083724  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:28.582086  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.955572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.459642  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.194785  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.693765  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:30.339130  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:32.339361  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.083363  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.582213  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.955312  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.955576  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.694783  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:34.339538  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.839469  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.842444  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.081206  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.581263  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.457091  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.956262  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.195134  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:40.195875  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.343304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.839634  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.080021  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.081543  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.453768  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.455182  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.457368  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:42.694667  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.195018  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.197081  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:46.338815  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:48.339683  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.083139  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.582320  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.954718  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.455135  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:49.696028  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.194484  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.340708  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.845026  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.082635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.583485  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.455840  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.955079  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.194627  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.197158  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.338956  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.339983  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.081903  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.583102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.955380  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.956134  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.695165  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.196563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:59.340299  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.838688  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.839025  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:00.080983  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:02.582197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:04.583222  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.454473  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.455187  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.455628  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.694518  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.695324  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.839239  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.341567  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.081736  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.581889  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.954781  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.954913  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.194118  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.194688  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:12.195198  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.840317  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.582436  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.583580  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.955894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.459525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.195588  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.195922  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:15.339470  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:17.340059  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.081770  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.082006  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.954957  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.455211  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.695530  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.193801  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.839618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.839819  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:20.083348  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:22.581010  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.582114  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.958579  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.454848  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:23.196520  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:25.196779  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.339942  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.340928  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.841122  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.583453  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:29.082667  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.455784  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.954086  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:27.695279  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.194416  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.341608  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.343898  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.581417  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.583852  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.955148  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.455525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:32.693640  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:34.695191  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.194999  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.838294  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.838948  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:36.082181  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.582488  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.955108  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.454392  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:40.455291  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.195193  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.694849  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.839180  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.339359  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.081697  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:43.081876  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.455905  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.962584  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.194494  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:46.195239  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.840607  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.338846  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:45.582002  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.083197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.454539  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.455025  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.694661  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.695232  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.840392  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.580410  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.580961  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.581502  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:51.954903  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.454053  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:53.194450  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:55.196537  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.339997  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.839677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.080798  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.087078  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.454639  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:58.955200  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.696210  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:00.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:02.194961  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.339152  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.340037  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.838551  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.582808  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.084331  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.458365  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.955679  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.696770  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:07.195364  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:05.840151  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.340709  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.582153  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.083260  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.454599  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.458281  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.196674  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.696022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.839588  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.342479  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.583479  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:14.081451  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.954623  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.455233  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.147383  876220 pod_ready.go:81] duration metric: took 4m0.000589332s waiting for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	E1114 15:58:15.147416  876220 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:58:15.147446  876220 pod_ready.go:38] duration metric: took 4m11.626263996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:15.147477  876220 kubeadm.go:640] restartCluster took 4m32.524775831s
	W1114 15:58:15.147587  876220 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:58:15.147630  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:58:14.195824  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.696055  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.841115  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.341347  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.084839  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.582575  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.696792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:20.838749  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:22.840049  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.080598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.081173  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.694974  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:26.196317  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.340015  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.839312  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.081700  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.582564  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.582728  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.037182  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.889530708s)
	I1114 15:58:29.037253  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:29.052797  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:58:29.061624  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:58:29.070799  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:58:29.070848  876220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:58:29.303905  876220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:58:28.695122  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.696046  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.341383  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:32.341988  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:31.584191  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.082795  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:33.195568  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:35.695145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.839094  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.840873  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.086791  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:38.581233  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.234828  876220 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:58:40.234881  876220 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:58:40.234965  876220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:58:40.235127  876220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:58:40.235264  876220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:58:40.235361  876220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:58:40.237159  876220 out.go:204]   - Generating certificates and keys ...
	I1114 15:58:40.237276  876220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:58:40.237366  876220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:58:40.237511  876220 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:58:40.237608  876220 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:58:40.237697  876220 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:58:40.237791  876220 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:58:40.237883  876220 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:58:40.237975  876220 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:58:40.238066  876220 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:58:40.238161  876220 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:58:40.238213  876220 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:58:40.238283  876220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:58:40.238352  876220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:58:40.238422  876220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:58:40.238506  876220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:58:40.238582  876220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:58:40.238725  876220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:58:40.238816  876220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:58:40.240266  876220 out.go:204]   - Booting up control plane ...
	I1114 15:58:40.240404  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:58:40.240508  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:58:40.240593  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:58:40.240822  876220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:58:40.240958  876220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:58:40.241018  876220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:58:40.241226  876220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:58:40.241333  876220 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.509675 seconds
	I1114 15:58:40.241470  876220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:58:40.241658  876220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:58:40.241744  876220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:58:40.241979  876220 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-279880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:58:40.242054  876220 kubeadm.go:322] [bootstrap-token] Using token: 2hujph.0fcw82xd7gxidhsk
	I1114 15:58:40.243677  876220 out.go:204]   - Configuring RBAC rules ...
	I1114 15:58:40.243823  876220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:58:40.243942  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:58:40.244131  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:58:40.244252  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:58:40.244351  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:58:40.244464  876220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:58:40.244616  876220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:58:40.244673  876220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:58:40.244732  876220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:58:40.244762  876220 kubeadm.go:322] 
	I1114 15:58:40.244828  876220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:58:40.244835  876220 kubeadm.go:322] 
	I1114 15:58:40.244904  876220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:58:40.244913  876220 kubeadm.go:322] 
	I1114 15:58:40.244934  876220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:58:40.244982  876220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:58:40.245027  876220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:58:40.245033  876220 kubeadm.go:322] 
	I1114 15:58:40.245108  876220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:58:40.245128  876220 kubeadm.go:322] 
	I1114 15:58:40.245185  876220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:58:40.245195  876220 kubeadm.go:322] 
	I1114 15:58:40.245269  876220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:58:40.245376  876220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:58:40.245483  876220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:58:40.245493  876220 kubeadm.go:322] 
	I1114 15:58:40.245606  876220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:58:40.245700  876220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:58:40.245708  876220 kubeadm.go:322] 
	I1114 15:58:40.245828  876220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.245986  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:58:40.246023  876220 kubeadm.go:322] 	--control-plane 
	I1114 15:58:40.246036  876220 kubeadm.go:322] 
	I1114 15:58:40.246148  876220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:58:40.246158  876220 kubeadm.go:322] 
	I1114 15:58:40.246247  876220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.246364  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:58:40.246386  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:58:40.246394  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:58:40.248160  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:58:40.249669  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:58:40.299570  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:58:40.399662  876220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:58:40.399751  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=embed-certs-279880 minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.399759  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.456044  876220 ops.go:34] apiserver oom_adj: -16
	I1114 15:58:40.674206  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.780887  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:37.695540  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.195681  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:39.338902  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.339264  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.339844  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.582722  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.082401  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.391744  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:41.892060  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.392311  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.892385  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.391523  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.892286  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.392103  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.891494  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:45.392324  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.695415  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.195275  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.842259  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.339758  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.582481  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.079990  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.891330  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.391723  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.892283  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.391436  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.891664  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.392116  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.892052  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.391957  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.892316  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:50.391756  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.696088  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.195252  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.195695  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.891614  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.391818  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.891371  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.391565  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.544346  876220 kubeadm.go:1081] duration metric: took 12.144659895s to wait for elevateKubeSystemPrivileges.
	I1114 15:58:52.544391  876220 kubeadm.go:406] StartCluster complete in 5m9.978264522s
	I1114 15:58:52.544428  876220 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.544541  876220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:58:52.547345  876220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.547635  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:58:52.547785  876220 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:58:52.547873  876220 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-279880"
	I1114 15:58:52.547886  876220 addons.go:69] Setting default-storageclass=true in profile "embed-certs-279880"
	I1114 15:58:52.547903  876220 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-279880"
	I1114 15:58:52.547907  876220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-279880"
	W1114 15:58:52.547915  876220 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:58:52.547951  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:58:52.547986  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548010  876220 addons.go:69] Setting metrics-server=true in profile "embed-certs-279880"
	I1114 15:58:52.548027  876220 addons.go:231] Setting addon metrics-server=true in "embed-certs-279880"
	W1114 15:58:52.548038  876220 addons.go:240] addon metrics-server should already be in state true
	I1114 15:58:52.548083  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548508  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548612  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548844  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.568396  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I1114 15:58:52.568429  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I1114 15:58:52.568402  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I1114 15:58:52.569005  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569019  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569009  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569581  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569612  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.569772  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569796  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.570042  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570183  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.570699  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.570718  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.570742  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.570723  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.571364  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.571943  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.571975  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.575936  876220 addons.go:231] Setting addon default-storageclass=true in "embed-certs-279880"
	W1114 15:58:52.575961  876220 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:58:52.575996  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.576368  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.576412  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.588007  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I1114 15:58:52.588767  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.589487  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.589505  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.589943  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.590164  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.591841  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1114 15:58:52.592269  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.592610  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.594453  876220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:58:52.593100  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.594839  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1114 15:58:52.595836  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:58:52.595856  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:58:52.595874  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.595879  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.596356  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.596654  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.596683  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.597179  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.597199  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.597596  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.598225  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.598250  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.598972  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599389  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.599412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599655  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.599823  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.599971  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.600085  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.601301  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.603202  876220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:58:52.604691  876220 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.604701  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:58:52.604714  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.607585  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.607911  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.607942  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.608138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.608309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.608450  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.608586  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.614716  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1114 15:58:52.615047  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.615462  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.615503  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.615849  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.616006  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.617386  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.617630  876220 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.617647  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:58:52.617666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.620337  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620656  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.620700  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620951  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.621103  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.621252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.621374  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.636800  876220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-279880" context rescaled to 1 replicas
	I1114 15:58:52.636844  876220 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:58:52.638665  876220 out.go:177] * Verifying Kubernetes components...
	I1114 15:58:50.340524  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.341233  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.080611  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.081851  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.582577  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.640094  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:52.829938  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.840140  876220 node_ready.go:35] waiting up to 6m0s for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.840653  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:58:52.858164  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.877415  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:58:52.877448  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:58:52.900588  876220 node_ready.go:49] node "embed-certs-279880" has status "Ready":"True"
	I1114 15:58:52.900614  876220 node_ready.go:38] duration metric: took 60.432125ms waiting for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.900624  876220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:52.972955  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:53.009532  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:58:53.009564  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:58:53.064247  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:53.064283  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:58:53.168472  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:54.543952  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.713966912s)
	I1114 15:58:54.544016  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544029  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544312  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544332  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.544343  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544374  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544650  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544697  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.569577  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.728879408s)
	I1114 15:58:54.569603  876220 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 15:58:54.572090  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.572118  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.572396  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.572420  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063126  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20491351s)
	I1114 15:58:55.063197  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063218  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063551  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063572  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063583  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063596  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063609  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.063888  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063910  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.228754  876220 pod_ready.go:102] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:55.671980  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.503435235s)
	I1114 15:58:55.672050  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672066  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672415  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672481  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672502  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672514  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.672777  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672795  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672807  876220 addons.go:467] Verifying addon metrics-server=true in "embed-certs-279880"
	I1114 15:58:55.674712  876220 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:58:55.676182  876220 addons.go:502] enable addons completed in 3.128402943s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:58:54.695084  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.696106  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.844023  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:57.338618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.660605  876220 pod_ready.go:92] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.660642  876220 pod_ready.go:81] duration metric: took 3.687643856s waiting for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.660659  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671773  876220 pod_ready.go:92] pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.671803  876220 pod_ready.go:81] duration metric: took 11.134131ms waiting for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671817  876220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679179  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.679212  876220 pod_ready.go:81] duration metric: took 7.385218ms waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679224  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691696  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.691721  876220 pod_ready.go:81] duration metric: took 12.488161ms waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691734  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704134  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.704153  876220 pod_ready.go:81] duration metric: took 12.411686ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704161  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950181  876220 pod_ready.go:92] pod "kube-proxy-qdppd" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:57.950213  876220 pod_ready.go:81] duration metric: took 1.246044532s waiting for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950226  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237122  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:58.237150  876220 pod_ready.go:81] duration metric: took 286.915812ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237158  876220 pod_ready.go:38] duration metric: took 5.336525686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:58.237177  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:58:58.237227  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:58:58.260115  876220 api_server.go:72] duration metric: took 5.623228202s to wait for apiserver process to appear ...
	I1114 15:58:58.260147  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:58:58.260169  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:58:58.265361  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:58:58.266889  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:58:58.266918  876220 api_server.go:131] duration metric: took 6.76351ms to wait for apiserver health ...
	I1114 15:58:58.266938  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:58:58.439329  876220 system_pods.go:59] 9 kube-system pods found
	I1114 15:58:58.439362  876220 system_pods.go:61] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.439367  876220 system_pods.go:61] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.439371  876220 system_pods.go:61] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.439375  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.439379  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.439383  876220 system_pods.go:61] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.439387  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.439395  876220 system_pods.go:61] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.439402  876220 system_pods.go:61] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.439412  876220 system_pods.go:74] duration metric: took 172.465662ms to wait for pod list to return data ...
	I1114 15:58:58.439426  876220 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:58:58.637240  876220 default_sa.go:45] found service account: "default"
	I1114 15:58:58.637269  876220 default_sa.go:55] duration metric: took 197.834816ms for default service account to be created ...
	I1114 15:58:58.637278  876220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:58:58.840945  876220 system_pods.go:86] 9 kube-system pods found
	I1114 15:58:58.840976  876220 system_pods.go:89] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.840984  876220 system_pods.go:89] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.840990  876220 system_pods.go:89] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.840996  876220 system_pods.go:89] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.841001  876220 system_pods.go:89] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.841008  876220 system_pods.go:89] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.841014  876220 system_pods.go:89] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.841024  876220 system_pods.go:89] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.841032  876220 system_pods.go:89] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.841046  876220 system_pods.go:126] duration metric: took 203.761925ms to wait for k8s-apps to be running ...
	I1114 15:58:58.841058  876220 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:58:58.841143  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:58.857376  876220 system_svc.go:56] duration metric: took 16.307402ms WaitForService to wait for kubelet.
	I1114 15:58:58.857414  876220 kubeadm.go:581] duration metric: took 6.220529321s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:58:58.857439  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:58:59.036083  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:58:59.036112  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:58:59.036123  876220 node_conditions.go:105] duration metric: took 178.67985ms to run NodePressure ...
	I1114 15:58:59.036136  876220 start.go:228] waiting for startup goroutines ...
	I1114 15:58:59.036142  876220 start.go:233] waiting for cluster config update ...
	I1114 15:58:59.036152  876220 start.go:242] writing updated cluster config ...
	I1114 15:58:59.036464  876220 ssh_runner.go:195] Run: rm -f paused
	I1114 15:58:59.092065  876220 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:58:59.093827  876220 out.go:177] * Done! kubectl is now configured to use "embed-certs-279880" cluster and "default" namespace by default
	I1114 15:58:57.082065  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.082525  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:58.696271  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.195205  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.339863  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.839918  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.582598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:02.796920  876668 pod_ready.go:81] duration metric: took 4m0.000259164s waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:02.796965  876668 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:02.796978  876668 pod_ready.go:38] duration metric: took 4m6.075965552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:02.796999  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:02.797042  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:02.797123  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:02.851170  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:02.851199  876668 cri.go:89] found id: ""
	I1114 15:59:02.851210  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:02.851271  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.857251  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:02.857323  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:02.904914  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:02.904939  876668 cri.go:89] found id: ""
	I1114 15:59:02.904947  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:02.904994  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.909276  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:02.909350  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:02.944708  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:02.944778  876668 cri.go:89] found id: ""
	I1114 15:59:02.944789  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:02.944856  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.949260  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:02.949334  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:02.986830  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:02.986858  876668 cri.go:89] found id: ""
	I1114 15:59:02.986868  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:02.986928  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.991432  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:02.991511  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:03.028072  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.028101  876668 cri.go:89] found id: ""
	I1114 15:59:03.028113  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:03.028177  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.032678  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:03.032771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:03.070651  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.070671  876668 cri.go:89] found id: ""
	I1114 15:59:03.070679  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:03.070727  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.075127  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:03.075192  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:03.117191  876668 cri.go:89] found id: ""
	I1114 15:59:03.117221  876668 logs.go:284] 0 containers: []
	W1114 15:59:03.117229  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:03.117235  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:03.117300  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:03.163227  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.163255  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:03.163260  876668 cri.go:89] found id: ""
	I1114 15:59:03.163269  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:03.163322  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.167410  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.171362  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:03.171389  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:03.330078  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:03.330113  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:03.372318  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:03.372349  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.414474  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:03.414506  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.471989  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:03.472025  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.516802  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:03.516834  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:03.532186  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:03.532218  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:03.987984  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:03.988029  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:04.045261  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:04.045305  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:04.095816  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:04.095853  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:04.148084  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:04.148132  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:04.200992  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:04.201039  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:04.239171  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:04.239207  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:03.695077  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.194941  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:04.339648  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.839045  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:08.841546  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.787847  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:06.808020  876668 api_server.go:72] duration metric: took 4m16.941929205s to wait for apiserver process to appear ...
	I1114 15:59:06.808052  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:06.808087  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:06.808146  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:06.849716  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:06.849747  876668 cri.go:89] found id: ""
	I1114 15:59:06.849758  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:06.849816  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.854025  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:06.854093  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:06.894331  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:06.894361  876668 cri.go:89] found id: ""
	I1114 15:59:06.894371  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:06.894430  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.899047  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:06.899137  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:06.947156  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:06.947194  876668 cri.go:89] found id: ""
	I1114 15:59:06.947206  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:06.947279  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.952972  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:06.953045  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:06.997872  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:06.997899  876668 cri.go:89] found id: ""
	I1114 15:59:06.997910  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:06.997972  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.002282  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:07.002362  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:07.041689  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.041722  876668 cri.go:89] found id: ""
	I1114 15:59:07.041734  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:07.041800  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.045730  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:07.045797  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:07.091996  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.092021  876668 cri.go:89] found id: ""
	I1114 15:59:07.092032  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:07.092094  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.100690  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:07.100771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:07.141635  876668 cri.go:89] found id: ""
	I1114 15:59:07.141670  876668 logs.go:284] 0 containers: []
	W1114 15:59:07.141681  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:07.141689  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:07.141750  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:07.184807  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:07.184839  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.184847  876668 cri.go:89] found id: ""
	I1114 15:59:07.184857  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:07.184920  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.189361  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.197666  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:07.197694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:07.243532  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:07.243568  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:07.284479  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:07.284520  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.326309  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:07.326341  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:07.794035  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:07.794077  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.836008  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:07.836050  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:07.886157  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:07.886192  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:07.930752  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:07.930795  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.983727  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:07.983765  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:08.024969  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:08.025000  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:08.079050  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:08.079090  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:08.093653  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:08.093691  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:08.228823  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:08.228864  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:08.196022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.196145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:12.196843  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:11.340269  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:13.840055  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.780836  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:59:10.793555  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:59:10.794839  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:59:10.794868  876668 api_server.go:131] duration metric: took 3.986808086s to wait for apiserver health ...
	I1114 15:59:10.794878  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:10.794907  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:10.794989  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:10.842028  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:10.842050  876668 cri.go:89] found id: ""
	I1114 15:59:10.842059  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:10.842113  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.846938  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:10.847030  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:10.893360  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:10.893386  876668 cri.go:89] found id: ""
	I1114 15:59:10.893394  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:10.893443  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.899601  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:10.899669  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:10.949519  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:10.949542  876668 cri.go:89] found id: ""
	I1114 15:59:10.949550  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:10.949602  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.953875  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:10.953936  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:10.994565  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:10.994595  876668 cri.go:89] found id: ""
	I1114 15:59:10.994605  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:10.994659  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.999120  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:10.999187  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:11.039364  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.039392  876668 cri.go:89] found id: ""
	I1114 15:59:11.039403  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:11.039509  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.044115  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:11.044174  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:11.088803  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.088835  876668 cri.go:89] found id: ""
	I1114 15:59:11.088846  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:11.088917  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.094005  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:11.094076  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:11.145247  876668 cri.go:89] found id: ""
	I1114 15:59:11.145276  876668 logs.go:284] 0 containers: []
	W1114 15:59:11.145285  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:11.145294  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:11.145355  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:11.188916  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.188950  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.188957  876668 cri.go:89] found id: ""
	I1114 15:59:11.188967  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:11.189029  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.195578  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.200146  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:11.200174  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:11.240413  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:11.240458  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.290614  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:11.290648  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:11.638700  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:11.638743  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:11.654234  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:11.654267  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.709147  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:11.709184  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:11.751661  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:11.751701  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.796993  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:11.797041  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.841478  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:11.841510  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:11.972862  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:11.972902  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:12.019217  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:12.019260  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:12.073396  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:12.073443  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:12.142653  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:12.142694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:14.704129  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:59:14.704159  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.704167  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.704173  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.704179  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.704184  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.704191  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.704200  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.704207  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.704217  876668 system_pods.go:74] duration metric: took 3.909331461s to wait for pod list to return data ...
	I1114 15:59:14.704231  876668 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:14.706920  876668 default_sa.go:45] found service account: "default"
	I1114 15:59:14.706944  876668 default_sa.go:55] duration metric: took 2.702527ms for default service account to be created ...
	I1114 15:59:14.706954  876668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:14.714049  876668 system_pods.go:86] 8 kube-system pods found
	I1114 15:59:14.714080  876668 system_pods.go:89] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.714089  876668 system_pods.go:89] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.714096  876668 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.714101  876668 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.714106  876668 system_pods.go:89] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.714113  876668 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.714128  876668 system_pods.go:89] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.714142  876668 system_pods.go:89] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.714152  876668 system_pods.go:126] duration metric: took 7.191238ms to wait for k8s-apps to be running ...
	I1114 15:59:14.714174  876668 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:59:14.714231  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:14.734987  876668 system_svc.go:56] duration metric: took 20.804278ms WaitForService to wait for kubelet.
	I1114 15:59:14.735015  876668 kubeadm.go:581] duration metric: took 4m24.868931304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:59:14.735038  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:59:14.737844  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:59:14.737868  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:59:14.737878  876668 node_conditions.go:105] duration metric: took 2.834918ms to run NodePressure ...
	I1114 15:59:14.737889  876668 start.go:228] waiting for startup goroutines ...
	I1114 15:59:14.737895  876668 start.go:233] waiting for cluster config update ...
	I1114 15:59:14.737905  876668 start.go:242] writing updated cluster config ...
	I1114 15:59:14.738157  876668 ssh_runner.go:195] Run: rm -f paused
	I1114 15:59:14.791076  876668 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:59:14.793853  876668 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-529430" cluster and "default" namespace by default
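	(At this point the default-k8s-diff-port-529430 profile is considered started. A quick manual sanity check, illustrative only and not part of the recorded run:

	  kubectl config current-context        # should print default-k8s-diff-port-529430
	  kubectl get nodes -o wide             # the single node should be Ready
	  kubectl -n kube-system get pods       # metrics-server-57f55c9bc5-ss2ks is still Pending in this run
	)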
	I1114 15:59:14.694842  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:15.887599  876396 pod_ready.go:81] duration metric: took 4m0.000892827s waiting for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:15.887641  876396 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:15.887664  876396 pod_ready.go:38] duration metric: took 4m1.199797165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:15.887694  876396 kubeadm.go:640] restartCluster took 5m7.501574769s
	W1114 15:59:15.887782  876396 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:15.887859  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:16.340114  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:18.340157  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:20.901839  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.013944828s)
	I1114 15:59:20.901933  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:20.915929  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:20.928081  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:20.937656  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:20.937756  876396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1114 15:59:20.998439  876396 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1114 15:59:20.998593  876396 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:21.145429  876396 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:21.145639  876396 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:21.145777  876396 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:21.387825  876396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:21.388897  876396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:21.396490  876396 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1114 15:59:21.518176  876396 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:21.520261  876396 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:21.520398  876396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:21.520496  876396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:21.520590  876396 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:21.520686  876396 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:21.520797  876396 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:21.520918  876396 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:21.521009  876396 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:21.521434  876396 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:21.521822  876396 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:21.522333  876396 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:21.522651  876396 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:21.522730  876396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:21.707438  876396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:21.890929  876396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:22.058077  876396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:22.234616  876396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:22.235636  876396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:22.237626  876396 out.go:204]   - Booting up control plane ...
	I1114 15:59:22.237743  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:22.241964  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:22.242976  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:22.244745  876396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:22.248349  876396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
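	(While kubeadm waits here, the control-plane components come up as static pods managed by the kubelet. A hand-run way to watch the same thing from the node, shown only as a sketch:

	  ls /etc/kubernetes/manifests/           # etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
	  sudo crictl ps --name kube-apiserver    # the container appears once the kubelet has started it
	  sudo journalctl -u kubelet -n 50        # kubelet logs if a component fails to come up
	)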
	I1114 15:59:20.341685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:22.838566  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:25.337887  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:27.341368  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.256998  876396 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005833 seconds
	I1114 15:59:32.257145  876396 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:32.272061  876396 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:32.797161  876396 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:32.797367  876396 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-842105 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 15:59:33.314721  876396 kubeadm.go:322] [bootstrap-token] Using token: 04dlot.9kpu87sb3ajm8dfs
	I1114 15:59:33.316454  876396 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:33.316628  876396 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:33.324455  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:33.328877  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:33.335460  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:33.339307  876396 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:33.422742  876396 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:33.757796  876396 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:33.759150  876396 kubeadm.go:322] 
	I1114 15:59:33.759248  876396 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:33.759281  876396 kubeadm.go:322] 
	I1114 15:59:33.759442  876396 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:33.759459  876396 kubeadm.go:322] 
	I1114 15:59:33.759495  876396 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:33.759577  876396 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:33.759647  876396 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:33.759657  876396 kubeadm.go:322] 
	I1114 15:59:33.759726  876396 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:33.759828  876396 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:33.759922  876396 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:33.759931  876396 kubeadm.go:322] 
	I1114 15:59:33.760050  876396 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1114 15:59:33.760143  876396 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:33.760154  876396 kubeadm.go:322] 
	I1114 15:59:33.760239  876396 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760360  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:33.760397  876396 kubeadm.go:322]     --control-plane 	  
	I1114 15:59:33.760408  876396 kubeadm.go:322] 
	I1114 15:59:33.760517  876396 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:33.760527  876396 kubeadm.go:322] 
	I1114 15:59:33.760624  876396 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760781  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:33.764918  876396 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:33.764993  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:59:33.765010  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:33.767708  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:29.839580  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.339612  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:33.072424  876065 pod_ready.go:81] duration metric: took 4m0.000921839s waiting for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:33.072553  876065 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:33.072606  876065 pod_ready.go:38] duration metric: took 4m10.602378093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:33.072664  876065 kubeadm.go:640] restartCluster took 4m30.632686786s
	W1114 15:59:33.072782  876065 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:33.073057  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:33.769398  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:33.781327  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
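	(The 457-byte conflist itself is not reproduced in the log. It can be inspected on the node, and a conventional bridge conflist has roughly the shape sketched in the comments below; the field values are assumptions, not the file's actual contents:

	  sudo cat /etc/cni/net.d/1-k8s.conflist
	  # Typical shape of a bridge CNI conflist (illustrative values only):
	  #   { "cniVersion": "0.3.1", "name": "bridge",
	  #     "plugins": [
	  #       { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	  #         "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	  #       { "type": "portmap", "capabilities": { "portMappings": true } } ] }
	)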
	I1114 15:59:33.810672  876396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:33.810839  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:33.810927  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=old-k8s-version-842105 minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.181391  876396 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:34.181528  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.301381  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.919870  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.419262  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.919637  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.419780  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.919453  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.420046  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.919605  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.419845  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.919474  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.419303  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.919616  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.419633  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.919220  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.419298  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.919396  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.420042  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.919886  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.419274  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.920217  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.419952  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.919511  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.419619  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.919762  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.420141  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.919676  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.261922  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.188828866s)
	I1114 15:59:47.262031  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:47.276268  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:47.285701  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:47.294481  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:47.294540  876065 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:59:47.348856  876065 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:59:47.348959  876065 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:47.530233  876065 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:47.530413  876065 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:47.530581  876065 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:47.784516  876065 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:47.420108  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.920005  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.419707  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.527158  876396 kubeadm.go:1081] duration metric: took 14.716377346s to wait for elevateKubeSystemPrivileges.
	I1114 15:59:48.527193  876396 kubeadm.go:406] StartCluster complete in 5m40.211957984s
	I1114 15:59:48.527213  876396 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.527323  876396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:59:48.529723  876396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.530058  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:59:48.530134  876396 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:59:48.530222  876396 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530248  876396 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-842105"
	W1114 15:59:48.530257  876396 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:59:48.530256  876396 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530285  876396 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-842105"
	W1114 15:59:48.530297  876396 addons.go:240] addon metrics-server should already be in state true
	I1114 15:59:48.530321  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530342  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530354  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:59:48.530429  876396 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530457  876396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-842105"
	I1114 15:59:48.530764  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530793  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530805  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530795  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530818  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530822  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.549568  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1114 15:59:48.549642  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I1114 15:59:48.550081  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550240  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550734  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550755  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.550866  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550887  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.551164  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551425  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551622  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.551766  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.551813  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.552539  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1114 15:59:48.553028  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.554044  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.554063  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.554522  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.555069  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555106  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.555404  876396 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-842105"
	W1114 15:59:48.555470  876396 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:59:48.555516  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.555924  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555961  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.576876  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1114 15:59:48.576912  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I1114 15:59:48.576878  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1114 15:59:48.577223  876396 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-842105" context rescaled to 1 replicas
	I1114 15:59:48.577266  876396 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:59:48.579711  876396 out.go:177] * Verifying Kubernetes components...
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577672  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.581751  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:48.580402  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581791  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580422  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581852  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580432  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581919  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.582238  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582286  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582314  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582439  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.582735  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.582751  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.583264  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.584865  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.586792  876396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:59:48.585415  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.588364  876396 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.588378  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:59:48.588398  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.592854  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.594307  876396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:59:47.786524  876065 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:47.786668  876065 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:47.786744  876065 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:47.786843  876065 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:47.786912  876065 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:47.787108  876065 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:47.787698  876065 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:47.788301  876065 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:47.788930  876065 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:47.789533  876065 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:47.790115  876065 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:47.790449  876065 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:47.790523  876065 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:47.975724  876065 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:48.056071  876065 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:48.340177  876065 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:48.733230  876065 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:48.734350  876065 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:48.738369  876065 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:48.740013  876065 out.go:204]   - Booting up control plane ...
	I1114 15:59:48.740143  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:48.740271  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:48.743856  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:48.763450  876065 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:48.764688  876065 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:48.764768  876065 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:59:48.932286  876065 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:48.592918  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.593079  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.595739  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:59:48.595754  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:59:48.595776  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.595826  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.595852  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.596957  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.597212  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.599011  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599448  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.599710  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599755  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.599975  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.600142  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.600304  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.607351  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I1114 15:59:48.607929  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.608484  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.608509  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.608998  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.609237  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.610958  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.611196  876396 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.611210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:59:48.611228  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.613709  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614297  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.614322  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614366  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.614539  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.614631  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.614711  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.708399  876396 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.708481  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:59:48.715087  876396 node_ready.go:49] node "old-k8s-version-842105" has status "Ready":"True"
	I1114 15:59:48.715111  876396 node_ready.go:38] duration metric: took 6.675707ms waiting for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.715124  876396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718748  876396 pod_ready.go:38] duration metric: took 3.605786ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718790  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:48.718857  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:48.750191  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.773186  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:59:48.773210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:59:48.788782  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.847057  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:59:48.847090  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:59:48.905401  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:48.905442  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:59:48.986582  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:49.606449  876396 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
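	(The sed pipeline run at 15:59:48 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host. The injected stanza below is taken verbatim from that command; a hand-run check with the same kubectl binary and kubeconfig used above:

	  sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml
	  # The Corefile should now contain, ahead of its forward block:
	  #     hosts {
	  #        192.168.72.1 host.minikube.internal
	  #        fallthrough
	  #     }
	)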
	I1114 15:59:49.606451  876396 api_server.go:72] duration metric: took 1.029145444s to wait for apiserver process to appear ...
	I1114 15:59:49.606506  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:49.606530  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:59:49.709702  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.709732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.710100  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.710130  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.710144  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.710153  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.711953  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.711985  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.711994  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.755976  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:59:49.756696  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.756719  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.757036  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.757103  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.757121  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.757390  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:59:49.757410  876396 api_server.go:131] duration metric: took 150.89717ms to wait for apiserver health ...
	I1114 15:59:49.757447  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:49.763460  876396 system_pods.go:59] 2 kube-system pods found
	I1114 15:59:49.763487  876396 system_pods.go:61] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.763497  876396 system_pods.go:61] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.763509  876396 system_pods.go:74] duration metric: took 6.051168ms to wait for pod list to return data ...
	I1114 15:59:49.763518  876396 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:49.776313  876396 default_sa.go:45] found service account: "default"
	I1114 15:59:49.776341  876396 default_sa.go:55] duration metric: took 12.814566ms for default service account to be created ...
	I1114 15:59:49.776351  876396 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:49.782462  876396 system_pods.go:86] 2 kube-system pods found
	I1114 15:59:49.782502  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.782518  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.782544  876396 retry.go:31] will retry after 311.640315ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.157150  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368304542s)
	I1114 15:59:50.157269  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157286  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.157688  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.157711  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.157730  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157743  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.158219  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.158270  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.169219  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.169264  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.169275  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.169282  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending
	I1114 15:59:50.169304  876396 retry.go:31] will retry after 335.621385ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.357400  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.370764048s)
	I1114 15:59:50.357474  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.357494  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.359782  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.359789  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.359811  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.359829  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.359840  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.360228  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.360264  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.360285  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.360333  876396 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-842105"
	I1114 15:59:50.362545  876396 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:59:50.364302  876396 addons.go:502] enable addons completed in 1.834168315s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:59:50.616547  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.616597  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.616608  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.616623  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.616645  876396 retry.go:31] will retry after 349.737645ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.971245  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.971286  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.971298  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.971312  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.971333  876396 retry.go:31] will retry after 562.981893ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:51.541777  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:51.541822  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:51.541849  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:51.541862  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:51.541870  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:51.541892  876396 retry.go:31] will retry after 617.692214ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:52.166157  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.166192  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.166199  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.166207  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.166211  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.166227  876396 retry.go:31] will retry after 671.968353ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:52.844235  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.844269  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.844276  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.844285  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.844290  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.844309  876396 retry.go:31] will retry after 955.353451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:53.814593  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:53.814626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:53.814636  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:53.814651  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:53.814661  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:53.814680  876396 retry.go:31] will retry after 1.306938168s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:55.127401  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:55.127436  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:55.127445  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:55.127457  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:55.127465  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:55.127488  876396 retry.go:31] will retry after 1.627615182s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.759304  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:56.759339  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:56.759345  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:56.759353  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:56.759358  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:56.759373  876396 retry.go:31] will retry after 2.046606031s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.936792  876065 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004387 seconds
	I1114 15:59:56.936992  876065 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:56.965969  876065 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:57.504894  876065 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:57.505171  876065 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-490998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:59:58.021451  876065 kubeadm.go:322] [bootstrap-token] Using token: 3x3ma3.qtutj9fi1nmgzc3r
	I1114 15:59:58.023064  876065 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:58.023220  876065 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:58.028334  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:59:58.039638  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:58.043783  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:58.048814  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:58.061419  876065 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:58.075996  876065 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:59:58.328245  876065 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:58.435170  876065 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:58.436684  876065 kubeadm.go:322] 
	I1114 15:59:58.436781  876065 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:58.436796  876065 kubeadm.go:322] 
	I1114 15:59:58.436889  876065 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:58.436932  876065 kubeadm.go:322] 
	I1114 15:59:58.436988  876065 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:58.437091  876065 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:58.437155  876065 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:58.437176  876065 kubeadm.go:322] 
	I1114 15:59:58.437231  876065 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:59:58.437239  876065 kubeadm.go:322] 
	I1114 15:59:58.437281  876065 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:59:58.437288  876065 kubeadm.go:322] 
	I1114 15:59:58.437353  876065 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:58.437449  876065 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:58.437564  876065 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:58.437574  876065 kubeadm.go:322] 
	I1114 15:59:58.437684  876065 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:59:58.437800  876065 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:58.437816  876065 kubeadm.go:322] 
	I1114 15:59:58.437937  876065 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438087  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:58.438116  876065 kubeadm.go:322] 	--control-plane 
	I1114 15:59:58.438124  876065 kubeadm.go:322] 
	I1114 15:59:58.438194  876065 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:58.438202  876065 kubeadm.go:322] 
	I1114 15:59:58.438267  876065 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438355  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:58.442217  876065 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:58.442251  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:59:58.442263  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:58.444078  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:58.445560  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:58.467849  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:58.501795  876065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:58.501941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.501965  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=no-preload-490998 minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.557314  876065 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:58.891105  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:59.006867  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.811870  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:58.811905  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:58.811912  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:58.811920  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:58.811924  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:58.811939  876396 retry.go:31] will retry after 2.166453413s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:00.984597  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:00.984626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:00.984632  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:00.984638  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:00.984643  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:00.984661  876396 retry.go:31] will retry after 2.339496963s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:59.620843  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.120941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.621244  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.121507  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.621512  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.121367  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.621449  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.120920  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.620857  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.329034  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:03.329061  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:03.329067  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:03.329074  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:03.329078  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:03.329097  876396 retry.go:31] will retry after 3.593700907s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:06.929268  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:06.929308  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:06.929316  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:06.929327  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:06.929335  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:06.929357  876396 retry.go:31] will retry after 4.929780079s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:04.121245  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:04.620976  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.120894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.621609  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.121209  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.621322  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.121613  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.620968  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.121482  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.621166  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.121032  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.620894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.120992  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.621306  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.121427  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.299388  876065 kubeadm.go:1081] duration metric: took 12.79751335s to wait for elevateKubeSystemPrivileges.
	I1114 16:00:11.299429  876065 kubeadm.go:406] StartCluster complete in 5m8.910317864s
	I1114 16:00:11.299489  876065 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.299594  876065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:00:11.301841  876065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.302097  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 16:00:11.302144  876065 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 16:00:11.302251  876065 addons.go:69] Setting storage-provisioner=true in profile "no-preload-490998"
	I1114 16:00:11.302268  876065 addons.go:69] Setting default-storageclass=true in profile "no-preload-490998"
	I1114 16:00:11.302287  876065 addons.go:231] Setting addon storage-provisioner=true in "no-preload-490998"
	W1114 16:00:11.302301  876065 addons.go:240] addon storage-provisioner should already be in state true
	I1114 16:00:11.302296  876065 addons.go:69] Setting metrics-server=true in profile "no-preload-490998"
	I1114 16:00:11.302327  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:00:11.302346  876065 addons.go:231] Setting addon metrics-server=true in "no-preload-490998"
	W1114 16:00:11.302360  876065 addons.go:240] addon metrics-server should already be in state true
	I1114 16:00:11.302361  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302408  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302287  876065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-490998"
	I1114 16:00:11.302858  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302926  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302942  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302956  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302863  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.303043  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.323089  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1114 16:00:11.323101  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1114 16:00:11.323750  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.323807  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.324339  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324362  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324554  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324577  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324806  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I1114 16:00:11.325059  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325120  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325172  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.325617  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.325652  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326120  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.326138  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.326359  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.326398  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326499  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.326665  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.330090  876065 addons.go:231] Setting addon default-storageclass=true in "no-preload-490998"
	W1114 16:00:11.330115  876065 addons.go:240] addon default-storageclass should already be in state true
	I1114 16:00:11.330144  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.330381  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.330415  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.347198  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1114 16:00:11.347385  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I1114 16:00:11.347562  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1114 16:00:11.347721  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347785  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347897  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.348216  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348232  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348346  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348358  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348366  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348370  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348593  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348729  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348878  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348947  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349143  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349223  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.349270  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.351308  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.353786  876065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 16:00:11.352409  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.355097  876065 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.355119  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 16:00:11.355141  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.356613  876065 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 16:00:11.357928  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 16:00:11.357949  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 16:00:11.357969  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.358548  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359421  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.359450  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359652  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.359922  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.360221  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.360379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.362075  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362508  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.362532  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362831  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.363041  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.363234  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.363390  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.379820  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I1114 16:00:11.380297  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.380905  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.380935  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.381326  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.381573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.383433  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.383722  876065 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.383741  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 16:00:11.383762  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.386432  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.386813  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.386845  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.387062  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.387311  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.387490  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.387661  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.450418  876065 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-490998" context rescaled to 1 replicas
	I1114 16:00:11.450472  876065 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:00:11.452499  876065 out.go:177] * Verifying Kubernetes components...
	I1114 16:00:11.864833  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:11.864867  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:11.864875  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:11.864884  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:11.864891  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:11.864918  876396 retry.go:31] will retry after 6.141765036s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:11.454141  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:11.560863  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.582400  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 16:00:11.582423  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 16:00:11.596910  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.626625  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 16:00:11.626652  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 16:00:11.634166  876065 node_ready.go:35] waiting up to 6m0s for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.634309  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
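	For readers tracing that CoreDNS rewrite: the sed pipeline above patches the Corefile held in the coredns ConfigMap rather than replacing it wholesale. Reconstructed from the command itself (not read back from the cluster), the change amounts to adding

	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }

	immediately before the existing "forward . /etc/resolv.conf" directive, plus a bare "log" directive before "errors". This is what start.go later reports as the host record being injected into CoreDNS's ConfigMap.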
	I1114 16:00:11.706391  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.706421  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 16:00:11.737914  876065 node_ready.go:49] node "no-preload-490998" has status "Ready":"True"
	I1114 16:00:11.737955  876065 node_ready.go:38] duration metric: took 103.74965ms waiting for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.737969  876065 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:11.795522  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.910850  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:13.838426  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.277507449s)
	I1114 16:00:13.838488  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838481  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.241527225s)
	I1114 16:00:13.838530  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838555  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838501  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838599  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.204200469s)
	I1114 16:00:13.838636  876065 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1114 16:00:13.838941  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.838992  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839001  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839008  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839016  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.839032  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839047  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839057  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839066  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841315  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841335  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.841398  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841418  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855083  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.059516605s)
	I1114 16:00:13.855146  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855169  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855524  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855572  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855588  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855600  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855612  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855921  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855949  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855961  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855979  876065 addons.go:467] Verifying addon metrics-server=true in "no-preload-490998"
	I1114 16:00:13.864145  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.864168  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.864444  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.864480  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.864491  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.867459  876065 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1114 16:00:13.868861  876065 addons.go:502] enable addons completed in 2.566733189s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1114 16:00:14.067240  876065 pod_ready.go:97] error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067289  876065 pod_ready.go:81] duration metric: took 2.15639988s waiting for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	E1114 16:00:14.067306  876065 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067315  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140385  876065 pod_ready.go:92] pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.140412  876065 pod_ready.go:81] duration metric: took 2.07308909s waiting for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140422  876065 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145818  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.145837  876065 pod_ready.go:81] duration metric: took 5.409163ms waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145845  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150850  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.150868  876065 pod_ready.go:81] duration metric: took 5.017013ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150877  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155895  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.155919  876065 pod_ready.go:81] duration metric: took 5.034132ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155931  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254239  876065 pod_ready.go:92] pod "kube-proxy-9nc8j" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.254270  876065 pod_ready.go:81] duration metric: took 98.331009ms waiting for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254282  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653014  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.653041  876065 pod_ready.go:81] duration metric: took 398.751468ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653049  876065 pod_ready.go:38] duration metric: took 4.915065516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:16.653066  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 16:00:16.653118  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 16:00:16.670396  876065 api_server.go:72] duration metric: took 5.219889322s to wait for apiserver process to appear ...
	I1114 16:00:16.670430  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 16:00:16.670450  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 16:00:16.675936  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
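	A rough manual equivalent of the healthz probe logged above (an illustration, not part of the recorded run; it assumes the apiserver's default anonymous access to /healthz is still enabled, and uses -k because the cluster certificate is self-signed):

	        curl -sk https://192.168.50.251:8443/healthz
	        ok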
	I1114 16:00:16.677570  876065 api_server.go:141] control plane version: v1.28.3
	I1114 16:00:16.677592  876065 api_server.go:131] duration metric: took 7.155742ms to wait for apiserver health ...
	I1114 16:00:16.677601  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 16:00:16.858468  876065 system_pods.go:59] 8 kube-system pods found
	I1114 16:00:16.858500  876065 system_pods.go:61] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:16.858505  876065 system_pods.go:61] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:16.858509  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:16.858514  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:16.858518  876065 system_pods.go:61] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:16.858522  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:16.858529  876065 system_pods.go:61] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:16.858534  876065 system_pods.go:61] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:16.858543  876065 system_pods.go:74] duration metric: took 180.935707ms to wait for pod list to return data ...
	I1114 16:00:16.858551  876065 default_sa.go:34] waiting for default service account to be created ...
	I1114 16:00:17.053423  876065 default_sa.go:45] found service account: "default"
	I1114 16:00:17.053478  876065 default_sa.go:55] duration metric: took 194.91891ms for default service account to be created ...
	I1114 16:00:17.053491  876065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 16:00:17.256504  876065 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:17.256539  876065 system_pods.go:89] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:17.256547  876065 system_pods.go:89] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:17.256554  876065 system_pods.go:89] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:17.256561  876065 system_pods.go:89] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:17.256567  876065 system_pods.go:89] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:17.256572  876065 system_pods.go:89] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:17.256582  876065 system_pods.go:89] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:17.256589  876065 system_pods.go:89] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:17.256602  876065 system_pods.go:126] duration metric: took 203.104027ms to wait for k8s-apps to be running ...
	I1114 16:00:17.256615  876065 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:17.256682  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:17.273098  876065 system_svc.go:56] duration metric: took 16.455935ms WaitForService to wait for kubelet.
	I1114 16:00:17.273135  876065 kubeadm.go:581] duration metric: took 5.822636312s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:17.273162  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:17.453601  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:17.453635  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:17.453675  876065 node_conditions.go:105] duration metric: took 180.505934ms to run NodePressure ...
	I1114 16:00:17.453692  876065 start.go:228] waiting for startup goroutines ...
	I1114 16:00:17.453706  876065 start.go:233] waiting for cluster config update ...
	I1114 16:00:17.453748  876065 start.go:242] writing updated cluster config ...
	I1114 16:00:17.454022  876065 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:17.505999  876065 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 16:00:17.509514  876065 out.go:177] * Done! kubectl is now configured to use "no-preload-490998" cluster and "default" namespace by default
	I1114 16:00:18.012940  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:18.012980  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:18.012988  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:18.012998  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:18.013007  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:18.013032  876396 retry.go:31] will retry after 7.087138718s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:25.105773  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:25.105804  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:25.105809  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:25.105817  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:25.105822  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:25.105842  876396 retry.go:31] will retry after 8.539395127s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:33.651084  876396 system_pods.go:86] 6 kube-system pods found
	I1114 16:00:33.651116  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:33.651121  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:33.651125  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:33.651129  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:33.651136  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:33.651141  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:33.651159  876396 retry.go:31] will retry after 10.428154724s: missing components: etcd, kube-apiserver
	I1114 16:00:44.086463  876396 system_pods.go:86] 7 kube-system pods found
	I1114 16:00:44.086496  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:44.086501  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:44.086506  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:44.086511  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:44.086515  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:44.086522  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:44.086527  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:44.086546  876396 retry.go:31] will retry after 10.535877375s: missing components: kube-apiserver
	I1114 16:00:54.631194  876396 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:54.631230  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:54.631237  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:54.631244  876396 system_pods.go:89] "kube-apiserver-old-k8s-version-842105" [3035c074-63ca-4b23-a375-415210397d17] Running
	I1114 16:00:54.631252  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:54.631259  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:54.631265  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:54.631275  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:54.631291  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:54.631304  876396 system_pods.go:126] duration metric: took 1m4.854946282s to wait for k8s-apps to be running ...
	I1114 16:00:54.631317  876396 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:54.631470  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:54.648616  876396 system_svc.go:56] duration metric: took 17.286024ms WaitForService to wait for kubelet.
	I1114 16:00:54.648650  876396 kubeadm.go:581] duration metric: took 1m6.071350783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:54.648677  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:54.652020  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:54.652055  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:54.652069  876396 node_conditions.go:105] duration metric: took 3.385579ms to run NodePressure ...
	I1114 16:00:54.652085  876396 start.go:228] waiting for startup goroutines ...
	I1114 16:00:54.652093  876396 start.go:233] waiting for cluster config update ...
	I1114 16:00:54.652106  876396 start.go:242] writing updated cluster config ...
	I1114 16:00:54.652418  876396 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:54.706394  876396 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1114 16:00:54.708374  876396 out.go:177] 
	W1114 16:00:54.709776  876396 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1114 16:00:54.711177  876396 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1114 16:00:54.712775  876396 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-842105" cluster and "default" namespace by default
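	For reference, the readiness checks logged above can usually be re-run by hand, using the endpoint and profile names taken from this log (a sketch only, assuming the clusters are still running and the context/profile names match the cluster names shown above):
	
	  # apiserver health, same endpoint the no-preload-490998 log probed; -k because the CA is minikube's own
	  curl -k https://192.168.50.251:8443/healthz
	  # kubelet service state on the node (standard form of the check the log runs over SSH)
	  sudo systemctl is-active kubelet
	  # kube-system pods via the kubectl bundled with minikube, avoiding the 1.28.3 vs 1.16.0 skew warned about above
	  minikube kubectl -p old-k8s-version-842105 -- get pods -A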
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:54:11 UTC, ends at Tue 2023-11-14 16:08:16 UTC. --
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.507364753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978096507349873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=da75992e-84d0-4d3f-9056-9a2167fc225e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.507923214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ddda193-2288-4911-8bb0-8a6696c52c20 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.508021401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ddda193-2288-4911-8bb0-8a6696c52c20 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.508360569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ddda193-2288-4911-8bb0-8a6696c52c20 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.557441624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d68b8bc4-54f1-4a75-86f7-4f3a1f19b340 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.557551358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d68b8bc4-54f1-4a75-86f7-4f3a1f19b340 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.558880076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=916bff2d-d743-4f65-a37b-b85654fea437 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.559692651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978096559662905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=916bff2d-d743-4f65-a37b-b85654fea437 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.560693432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=942d2c31-55a4-4405-9261-961405199d9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.560835596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=942d2c31-55a4-4405-9261-961405199d9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.561150889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=942d2c31-55a4-4405-9261-961405199d9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.605602480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7eec5673-5644-4b6e-9ccf-3ff3a3c33b24 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.605743326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7eec5673-5644-4b6e-9ccf-3ff3a3c33b24 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.606626414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a6b37735-4645-4482-bdf9-31938de374a6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.607040245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978096607021648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a6b37735-4645-4482-bdf9-31938de374a6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.607660709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=856b89fa-906e-4ddf-ab53-f7b4a76555ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.607743620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=856b89fa-906e-4ddf-ab53-f7b4a76555ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.607965892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=856b89fa-906e-4ddf-ab53-f7b4a76555ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.647396169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3dd77a0e-356c-4f7e-b0bd-b841a11172e1 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.647476501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3dd77a0e-356c-4f7e-b0bd-b841a11172e1 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.648999571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3aa33261-c62a-4a1d-ab65-a0350a5392b1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.649550260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978096649534924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3aa33261-c62a-4a1d-ab65-a0350a5392b1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.649973790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef16e48d-5c59-4e2d-affd-e1da4edc6112 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.650046189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef16e48d-5c59-4e2d-affd-e1da4edc6112 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:08:16 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:08:16.650373747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef16e48d-5c59-4e2d-affd-e1da4edc6112 name=/runtime.v1.RuntimeService/ListContainers
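	The repeated Version / ImageFsInfo / ListContainers requests in the CRI-O journal above are the kubelet's periodic CRI polls. Roughly the same queries can be issued by hand on the guest with crictl (a sketch, assuming crictl is present on the node image and the profile name matches the hostname shown in the journal):
	
	  minikube ssh -p default-k8s-diff-port-529430 -- sudo crictl version      # RuntimeName/RuntimeVersion, as in the Version responses
	  minikube ssh -p default-k8s-diff-port-529430 -- sudo crictl ps -a        # container list, as in ListContainers
	  minikube ssh -p default-k8s-diff-port-529430 -- sudo crictl imagefsinfo  # image filesystem usage, as in ImageFsInfo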
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	19e99b311805a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   07d79896994bb       storage-provisioner
	7e03edb967810       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f6c23dac7d3b5       busybox
	335b691953328       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   b61185af9c4f3       coredns-5dd5756b68-b8szg
	a9e10dc7650db       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      13 minutes ago      Running             kube-proxy                1                   45952e1a5bc40       kube-proxy-zpchs
	251b882e2626a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   07d79896994bb       storage-provisioner
	ab4ac318c279a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   1974315b49394       etcd-default-k8s-diff-port-529430
	96d5f7a9c1434       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      13 minutes ago      Running             kube-controller-manager   1                   8fc4ff502e05c       kube-controller-manager-default-k8s-diff-port-529430
	c8ca3bf950b59       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      13 minutes ago      Running             kube-apiserver            1                   7f3f711eb9f7b       kube-apiserver-default-k8s-diff-port-529430
	bde54fa8d8b9d       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      13 minutes ago      Running             kube-scheduler            1                   3bc7b2a145834       kube-scheduler-default-k8s-diff-port-529430
	
	* 
	* ==> coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44655 - 3978 "HINFO IN 8021990947516006082.6706459765484640430. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013127245s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-529430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-529430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=default-k8s-diff-port-529430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_46_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:46:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-529430
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 16:08:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:05:29 +0000   Tue, 14 Nov 2023 15:46:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:05:29 +0000   Tue, 14 Nov 2023 15:46:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:05:29 +0000   Tue, 14 Nov 2023 15:46:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:05:29 +0000   Tue, 14 Nov 2023 15:54:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.196
	  Hostname:    default-k8s-diff-port-529430
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a20cb6a3ff3846808fbb02ac20cde918
	  System UUID:                a20cb6a3-ff38-4680-8fbb-02ac20cde918
	  Boot ID:                    4a895212-5e91-4626-b198-6d476df0a51a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-b8szg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-529430                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-529430             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-529430    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-zpchs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-529430             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-ss2ks                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-529430 event: Registered Node default-k8s-diff-port-529430 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-529430 event: Registered Node default-k8s-diff-port-529430 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.078958] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.792531] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.615335] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154088] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.506850] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.330399] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.123913] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.184703] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.150943] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.253952] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.937293] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +15.082527] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] <==
	* {"level":"info","ts":"2023-11-14T15:54:44.484567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46c40e62c30432f became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:54:44.484615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46c40e62c30432f received MsgPreVoteResp from 46c40e62c30432f at term 2"}
	{"level":"info","ts":"2023-11-14T15:54:44.484649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46c40e62c30432f became candidate at term 3"}
	{"level":"info","ts":"2023-11-14T15:54:44.484673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46c40e62c30432f received MsgVoteResp from 46c40e62c30432f at term 3"}
	{"level":"info","ts":"2023-11-14T15:54:44.484701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46c40e62c30432f became leader at term 3"}
	{"level":"info","ts":"2023-11-14T15:54:44.484726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46c40e62c30432f elected leader 46c40e62c30432f at term 3"}
	{"level":"info","ts":"2023-11-14T15:54:44.486332Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"46c40e62c30432f","local-member-attributes":"{Name:default-k8s-diff-port-529430 ClientURLs:[https://192.168.61.196:2379]}","request-path":"/0/members/46c40e62c30432f/attributes","cluster-id":"c625d7fc95f1345b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:54:44.48656Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:54:44.487572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:54:44.487709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:54:44.488608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.196:2379"}
	{"level":"info","ts":"2023-11-14T15:54:44.490773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:54:44.490821Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:54:51.154303Z","caller":"traceutil/trace.go:171","msg":"trace[16869641] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"336.756548ms","start":"2023-11-14T15:54:50.81753Z","end":"2023-11-14T15:54:51.154286Z","steps":["trace[16869641] 'process raft request'  (duration: 335.954847ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T15:54:51.154743Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T15:54:50.817516Z","time spent":"336.864339ms","remote":"127.0.0.1:36828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17978855fce967dd\" mod_revision:542 > success:<request_put:<key:\"/registry/events/default/busybox.17978855fce967dd\" value_size:700 lease:4841241843654433660 >> failure:<request_range:<key:\"/registry/events/default/busybox.17978855fce967dd\" > >"}
	{"level":"info","ts":"2023-11-14T15:54:58.973809Z","caller":"traceutil/trace.go:171","msg":"trace[62756389] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"172.508662ms","start":"2023-11-14T15:54:58.801286Z","end":"2023-11-14T15:54:58.973795Z","steps":["trace[62756389] 'process raft request'  (duration: 172.373551ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:54:58.975577Z","caller":"traceutil/trace.go:171","msg":"trace[553473316] linearizableReadLoop","detail":"{readStateIndex:610; appliedIndex:609; }","duration":"152.241199ms","start":"2023-11-14T15:54:58.823326Z","end":"2023-11-14T15:54:58.975567Z","steps":["trace[553473316] 'read index received'  (duration: 150.625564ms)","trace[553473316] 'applied index is now lower than readState.Index'  (duration: 1.615195ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T15:54:58.97579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.230746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-11-14T15:54:58.975971Z","caller":"traceutil/trace.go:171","msg":"trace[2116139502] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:570; }","duration":"102.421032ms","start":"2023-11-14T15:54:58.873535Z","end":"2023-11-14T15:54:58.975956Z","steps":["trace[2116139502] 'agreement among raft nodes before linearized reading'  (duration: 102.196152ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T15:54:58.976058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.73489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-11-14T15:54:58.976696Z","caller":"traceutil/trace.go:171","msg":"trace[1370932989] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:570; }","duration":"153.34086ms","start":"2023-11-14T15:54:58.823308Z","end":"2023-11-14T15:54:58.976649Z","steps":["trace[1370932989] 'agreement among raft nodes before linearized reading'  (duration: 152.658321ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:54:58.975898Z","caller":"traceutil/trace.go:171","msg":"trace[636378504] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"173.597694ms","start":"2023-11-14T15:54:58.802292Z","end":"2023-11-14T15:54:58.97589Z","steps":["trace[636378504] 'process raft request'  (duration: 173.13509ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T16:04:44.530149Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2023-11-14T16:04:44.533021Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"2.431662ms","hash":2229308458}
	{"level":"info","ts":"2023-11-14T16:04:44.533109Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2229308458,"revision":825,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  16:08:17 up 14 min,  0 users,  load average: 0.11, 0.31, 0.26
	Linux default-k8s-diff-port-529430 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] <==
	* I1114 16:04:46.407093       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:04:47.406801       1 handler_proxy.go:93] no RequestInfo found in the context
	W1114 16:04:47.406973       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:04:47.406981       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:04:47.407310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1114 16:04:47.407508       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:04:47.408810       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:05:46.201718       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:05:47.407803       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:05:47.407918       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:05:47.407926       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:05:47.409131       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:05:47.409174       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:05:47.409233       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:06:46.202449       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:07:46.201007       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:07:47.409091       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:07:47.409374       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:07:47.409429       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:07:47.409493       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:07:47.409558       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:07:47.410926       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] <==
	* I1114 16:02:29.454480       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:02:59.063898       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:02:59.464042       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:03:29.069349       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:03:29.473448       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:03:59.075154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:03:59.481975       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:04:29.085351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:04:29.493679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:04:59.091543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:04:59.506118       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:05:29.097086       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:05:29.515071       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:05:59.103090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:05:59.527347       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:06:06.178936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="433.448µs"
	I1114 16:06:20.174424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="402.149µs"
	E1114 16:06:29.110341       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:06:29.536608       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:06:59.115645       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:06:59.550408       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:07:29.120992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:07:29.559439       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:07:59.127365       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:07:59.569087       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] <==
	* I1114 15:54:48.321473       1 server_others.go:69] "Using iptables proxy"
	I1114 15:54:48.340907       1 node.go:141] Successfully retrieved node IP: 192.168.61.196
	I1114 15:54:48.558817       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:54:48.559092       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:54:48.571155       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:54:48.571510       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:54:48.579689       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:54:48.579819       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:54:48.584466       1 config.go:188] "Starting service config controller"
	I1114 15:54:48.584523       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:54:48.584558       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:54:48.584573       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:54:48.586455       1 config.go:315] "Starting node config controller"
	I1114 15:54:48.586681       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:54:48.685030       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:54:48.685108       1 shared_informer.go:318] Caches are synced for service config
	I1114 15:54:48.686893       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] <==
	* I1114 15:54:44.499682       1 serving.go:348] Generated self-signed cert in-memory
	W1114 15:54:46.361284       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 15:54:46.361408       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 15:54:46.361420       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 15:54:46.361426       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 15:54:46.402065       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 15:54:46.402305       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:54:46.406904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 15:54:46.406962       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 15:54:46.408425       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 15:54:46.408528       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 15:54:46.507354       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:54:11 UTC, ends at Tue 2023-11-14 16:08:17 UTC. --
	Nov 14 16:05:40 default-k8s-diff-port-529430 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:05:40 default-k8s-diff-port-529430 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:05:40 default-k8s-diff-port-529430 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:05:52 default-k8s-diff-port-529430 kubelet[933]: E1114 16:05:52.168017     933 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 14 16:05:52 default-k8s-diff-port-529430 kubelet[933]: E1114 16:05:52.168058     933 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 14 16:05:52 default-k8s-diff-port-529430 kubelet[933]: E1114 16:05:52.168333     933 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mfnxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-ss2ks_kube-system(73fc9292-8667-473e-b3ca-43c4ae9fbdb9): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:05:52 default-k8s-diff-port-529430 kubelet[933]: E1114 16:05:52.168376     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:06:06 default-k8s-diff-port-529430 kubelet[933]: E1114 16:06:06.159685     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:06:20 default-k8s-diff-port-529430 kubelet[933]: E1114 16:06:20.158822     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:06:35 default-k8s-diff-port-529430 kubelet[933]: E1114 16:06:35.157420     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:06:40 default-k8s-diff-port-529430 kubelet[933]: E1114 16:06:40.177728     933 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:06:40 default-k8s-diff-port-529430 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:06:40 default-k8s-diff-port-529430 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:06:40 default-k8s-diff-port-529430 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:06:47 default-k8s-diff-port-529430 kubelet[933]: E1114 16:06:47.157475     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:07:02 default-k8s-diff-port-529430 kubelet[933]: E1114 16:07:02.157706     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:07:16 default-k8s-diff-port-529430 kubelet[933]: E1114 16:07:16.158451     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:07:30 default-k8s-diff-port-529430 kubelet[933]: E1114 16:07:30.158973     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:07:40 default-k8s-diff-port-529430 kubelet[933]: E1114 16:07:40.173597     933 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:07:40 default-k8s-diff-port-529430 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:07:40 default-k8s-diff-port-529430 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:07:40 default-k8s-diff-port-529430 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:07:45 default-k8s-diff-port-529430 kubelet[933]: E1114 16:07:45.157345     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:07:57 default-k8s-diff-port-529430 kubelet[933]: E1114 16:07:57.157119     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:08:10 default-k8s-diff-port-529430 kubelet[933]: E1114 16:08:10.157874     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	
	* 
	* ==> storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] <==
	* I1114 15:55:18.477990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 15:55:18.495268       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 15:55:18.495351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 15:55:35.901536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 15:55:35.902304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7d5a72e5-d297-4c5a-85e9-7507bad408b6", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-529430_2806e63b-34b1-4ed2-93a5-38b89e4eb2c2 became leader
	I1114 15:55:35.902431       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-529430_2806e63b-34b1-4ed2-93a5-38b89e4eb2c2!
	I1114 15:55:36.003531       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-529430_2806e63b-34b1-4ed2-93a5-38b89e4eb2c2!
	
	* 
	* ==> storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] <==
	* I1114 15:54:48.142003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1114 15:55:18.143621       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ss2ks
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 describe pod metrics-server-57f55c9bc5-ss2ks
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-529430 describe pod metrics-server-57f55c9bc5-ss2ks: exit status 1 (74.480787ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ss2ks" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-529430 describe pod metrics-server-57f55c9bc5-ss2ks: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-490998 -n no-preload-490998
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:09:18.084049337 +0000 UTC m=+5425.614234301
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
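For reference, the selector this step polls can be checked by hand against the same profile; a minimal sketch, assuming the no-preload-490998 kubeconfig context is still available and that the dashboard addon keeps its usual k8s-app=kubernetes-dashboard label (a hypothetical manual check, not part of the recorded run):

	# list the dashboard pods the test was waiting for
	kubectl --context no-preload-490998 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

An empty listing would be consistent with the failure above: the Audit table below records "addons enable dashboard -p no-preload-490998" at 15:48 UTC, yet no matching pod reached Running within the 9m0s window after the restart.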
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-490998 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-490998 logs -n 25: (1.615919413s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:49:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:49:49.997953  876668 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:49:49.998137  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998147  876668 out.go:309] Setting ErrFile to fd 2...
	I1114 15:49:49.998152  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998369  876668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:49:49.998978  876668 out.go:303] Setting JSON to false
	I1114 15:49:50.000072  876668 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":45142,"bootTime":1699931848,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:49:50.000141  876668 start.go:138] virtualization: kvm guest
	I1114 15:49:50.002690  876668 out.go:177] * [default-k8s-diff-port-529430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:49:50.004392  876668 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:49:50.004396  876668 notify.go:220] Checking for updates...
	I1114 15:49:50.006193  876668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:49:50.007844  876668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:49:50.009232  876668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:49:50.010572  876668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:49:50.011857  876668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:49:50.013604  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:49:50.014059  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.014149  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.028903  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I1114 15:49:50.029290  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.029869  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.029892  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.030244  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.030474  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.030753  876668 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:49:50.031049  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.031096  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.045696  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I1114 15:49:50.046117  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.046625  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.046658  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.047069  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.047303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.082731  876668 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:49:50.084362  876668 start.go:298] selected driver: kvm2
	I1114 15:49:50.084384  876668 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.084517  876668 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:49:50.085533  876668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.085625  876668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:49:50.100834  876668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:49:50.101226  876668 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:49:50.101308  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:49:50.101328  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:49:50.101342  876668 start_flags.go:323] config:
	{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-52943
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.101540  876668 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.103413  876668 out.go:177] * Starting control plane node default-k8s-diff-port-529430 in cluster default-k8s-diff-port-529430
	I1114 15:49:49.196989  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:52.269051  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:50.104763  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:49:50.104815  876668 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:49:50.104835  876668 cache.go:56] Caching tarball of preloaded images
	I1114 15:49:50.104932  876668 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:49:50.104946  876668 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:49:50.105089  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:49:50.105307  876668 start.go:365] acquiring machines lock for default-k8s-diff-port-529430: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
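Note on the preload step above: it is a plain cache check — the cri-o/v1.28.3 images tarball is already present under .minikube/cache, so the download is skipped and only its existence is verified. A minimal Go sketch of that cache-or-download decision (illustrative only; ensurePreload and the downloader callback are hypothetical names, not minikube's actual API):

	package main

	import (
		"fmt"
		"os"
	)

	// ensurePreload returns the path to a preloaded-images tarball, downloading it
	// only when it is not already present in the local cache.
	func ensurePreload(cachePath string, download func(dest string) error) (string, error) {
		if _, err := os.Stat(cachePath); err == nil {
			// Found local preload in cache: skip the download, as in the log above.
			return cachePath, nil
		} else if !os.IsNotExist(err) {
			return "", fmt.Errorf("checking preload cache: %w", err)
		}
		if err := download(cachePath); err != nil {
			return "", fmt.Errorf("downloading preload: %w", err)
		}
		return cachePath, nil
	}

	func main() {
		path, err := ensurePreload("/tmp/preloaded-images-k8s.tar.lz4", func(dest string) error {
			fmt.Println("cache miss, would download to", dest)
			return os.WriteFile(dest, []byte("stub"), 0o644)
		})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload available at", path)
	}
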
	I1114 15:49:58.349061  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:01.421017  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:07.501030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:10.573058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:16.653093  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:19.725040  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:25.805047  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:28.877039  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:34.957084  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:38.029008  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:44.109068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:47.181018  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:53.261065  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:56.333048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:02.413048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:05.485063  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:11.565034  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:14.636996  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:20.717050  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:23.789097  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:29.869058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:32.941066  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:39.021029  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:42.093064  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:48.173074  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:51.245007  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:57.325014  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:00.397111  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:06.477052  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:09.549016  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:15.629105  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:18.701000  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:24.781004  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:27.853046  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:33.933030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:37.005067  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:43.085068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:46.157044  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:52.237056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:55.309080  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:01.389056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:04.461005  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:10.541083  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:13.613033  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
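The long run of "no route to host" lines above (process 876065, profile no-preload-490998) is libmachine probing the guest's SSH port every few seconds while the VM remains unreachable; the loop only gives up once its overall timeout expires, which is why the failure surfaces minutes later as "provision: host is not running". A rough, illustrative probe loop in Go — the fixed 3s interval and 30s timeout below are assumptions for the sketch, not minikube's actual retry policy:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH dials host:22 until a connection succeeds or the deadline passes.
	// Each failed attempt is reported, much like the "Error dialing TCP" lines above.
	func waitForSSH(host string, timeout time.Duration) error {
		addr := net.JoinHostPort(host, "22")
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("Error dialing TCP: %v, retrying\n", err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("192.168.50.251", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
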
	I1114 15:53:16.617368  876220 start.go:369] acquired machines lock for "embed-certs-279880" in 4m25.691009916s
	I1114 15:53:16.617492  876220 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:16.617500  876220 fix.go:54] fixHost starting: 
	I1114 15:53:16.617993  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:16.618029  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:16.633223  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1114 15:53:16.633787  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:16.634385  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:53:16.634412  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:16.634743  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:16.634958  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:16.635120  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:53:16.636933  876220 fix.go:102] recreateIfNeeded on embed-certs-279880: state=Stopped err=<nil>
	I1114 15:53:16.636967  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	W1114 15:53:16.637164  876220 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:16.638727  876220 out.go:177] * Restarting existing kvm2 VM for "embed-certs-279880" ...
	I1114 15:53:16.615062  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:16.615116  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:53:16.617147  876065 machine.go:91] provisioned docker machine in 4m37.399136623s
	I1114 15:53:16.617196  876065 fix.go:56] fixHost completed within 4m37.422027817s
	I1114 15:53:16.617203  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 4m37.422123699s
	W1114 15:53:16.617282  876065 start.go:691] error starting host: provision: host is not running
	W1114 15:53:16.617491  876065 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1114 15:53:16.617502  876065 start.go:706] Will try again in 5 seconds ...
	I1114 15:53:16.640137  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Start
	I1114 15:53:16.640330  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring networks are active...
	I1114 15:53:16.641029  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network default is active
	I1114 15:53:16.641386  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network mk-embed-certs-279880 is active
	I1114 15:53:16.641738  876220 main.go:141] libmachine: (embed-certs-279880) Getting domain xml...
	I1114 15:53:16.642488  876220 main.go:141] libmachine: (embed-certs-279880) Creating domain...
	I1114 15:53:17.858298  876220 main.go:141] libmachine: (embed-certs-279880) Waiting to get IP...
	I1114 15:53:17.859506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:17.859912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:17.860039  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:17.859881  877182 retry.go:31] will retry after 225.269159ms: waiting for machine to come up
	I1114 15:53:18.086611  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.087099  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.087135  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.087062  877182 retry.go:31] will retry after 322.840106ms: waiting for machine to come up
	I1114 15:53:18.411781  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.412238  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.412278  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.412127  877182 retry.go:31] will retry after 459.77881ms: waiting for machine to come up
	I1114 15:53:18.873994  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.874393  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.874433  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.874341  877182 retry.go:31] will retry after 460.123636ms: waiting for machine to come up
	I1114 15:53:19.335916  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.336488  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.336520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.336414  877182 retry.go:31] will retry after 526.141665ms: waiting for machine to come up
	I1114 15:53:19.864336  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.864830  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.864856  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.864766  877182 retry.go:31] will retry after 817.261813ms: waiting for machine to come up
	I1114 15:53:20.683806  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:20.684289  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:20.684309  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:20.684244  877182 retry.go:31] will retry after 1.026381849s: waiting for machine to come up
	I1114 15:53:21.619196  876065 start.go:365] acquiring machines lock for no-preload-490998: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:53:21.712760  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:21.713237  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:21.713263  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:21.713201  877182 retry.go:31] will retry after 1.088672482s: waiting for machine to come up
	I1114 15:53:22.803222  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:22.803698  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:22.803734  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:22.803639  877182 retry.go:31] will retry after 1.394534659s: waiting for machine to come up
	I1114 15:53:24.199372  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:24.199764  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:24.199794  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:24.199706  877182 retry.go:31] will retry after 1.511094366s: waiting for machine to come up
	I1114 15:53:25.713650  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:25.714062  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:25.714107  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:25.713980  877182 retry.go:31] will retry after 1.821074261s: waiting for machine to come up
	I1114 15:53:27.536875  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:27.537423  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:27.537458  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:27.537349  877182 retry.go:31] will retry after 2.856840662s: waiting for machine to come up
	I1114 15:53:30.395562  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:30.396059  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:30.396086  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:30.396007  877182 retry.go:31] will retry after 4.003431067s: waiting for machine to come up
	I1114 15:53:35.689894  876396 start.go:369] acquired machines lock for "old-k8s-version-842105" in 4m23.221865246s
	I1114 15:53:35.689964  876396 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:35.689973  876396 fix.go:54] fixHost starting: 
	I1114 15:53:35.690410  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:35.690446  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:35.709418  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I1114 15:53:35.709816  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:35.710366  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:53:35.710400  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:35.710760  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:35.710946  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:35.711101  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:53:35.712666  876396 fix.go:102] recreateIfNeeded on old-k8s-version-842105: state=Stopped err=<nil>
	I1114 15:53:35.712696  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	W1114 15:53:35.712882  876396 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:35.715357  876396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-842105" ...
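The parallel profile starts above serialize on a named "machines" lock: note the {... Delay:500ms Timeout:13m0s} parameters when the lock is requested and the later "acquired machines lock ... in 4m25s / 4m23s" lines once a competing start releases it. A simplified sketch of acquiring such a per-name lock by polling with a delay and an overall timeout (illustrative only; minikube's real implementation uses its own lock helpers, not this code):

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	var (
		mu   sync.Mutex
		held = map[string]bool{}
	)

	// acquire polls until the named lock is free or the timeout expires,
	// mirroring the Delay/Timeout pair shown in the log's lock parameters.
	func acquire(name string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			mu.Lock()
			if !held[name] {
				held[name] = true
				mu.Unlock()
				return nil
			}
			mu.Unlock()
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for machines lock %q", name)
			}
			time.Sleep(delay)
		}
	}

	func release(name string) {
		mu.Lock()
		delete(held, name)
		mu.Unlock()
	}

	func main() {
		start := time.Now()
		if err := acquire("embed-certs-279880", 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer release("embed-certs-279880")
		fmt.Printf("acquired machines lock for %q in %s\n", "embed-certs-279880", time.Since(start))
	}
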
	I1114 15:53:34.403163  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.403706  876220 main.go:141] libmachine: (embed-certs-279880) Found IP for machine: 192.168.39.147
	I1114 15:53:34.403737  876220 main.go:141] libmachine: (embed-certs-279880) Reserving static IP address...
	I1114 15:53:34.403757  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has current primary IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.404290  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.404318  876220 main.go:141] libmachine: (embed-certs-279880) DBG | skip adding static IP to network mk-embed-certs-279880 - found existing host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"}
	I1114 15:53:34.404331  876220 main.go:141] libmachine: (embed-certs-279880) Reserved static IP address: 192.168.39.147
	I1114 15:53:34.404343  876220 main.go:141] libmachine: (embed-certs-279880) Waiting for SSH to be available...
	I1114 15:53:34.404351  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Getting to WaitForSSH function...
	I1114 15:53:34.406833  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407219  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.407248  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407367  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH client type: external
	I1114 15:53:34.407412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa (-rw-------)
	I1114 15:53:34.407481  876220 main.go:141] libmachine: (embed-certs-279880) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:34.407525  876220 main.go:141] libmachine: (embed-certs-279880) DBG | About to run SSH command:
	I1114 15:53:34.407551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | exit 0
	I1114 15:53:34.504225  876220 main.go:141] libmachine: (embed-certs-279880) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:34.504696  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetConfigRaw
	I1114 15:53:34.505414  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.508202  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.508632  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.508685  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.509034  876220 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/config.json ...
	I1114 15:53:34.509282  876220 machine.go:88] provisioning docker machine ...
	I1114 15:53:34.509309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:34.509521  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509730  876220 buildroot.go:166] provisioning hostname "embed-certs-279880"
	I1114 15:53:34.509758  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509894  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.511987  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512285  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.512317  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512472  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.512629  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512751  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512925  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.513118  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.513578  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.513594  876220 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-279880 && echo "embed-certs-279880" | sudo tee /etc/hostname
	I1114 15:53:34.664546  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-279880
	
	I1114 15:53:34.664595  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.667537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.667908  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.667941  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.668142  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.668388  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668631  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668910  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.669142  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.669652  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.669684  876220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-279880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-279880/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-279880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:34.810710  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:34.810745  876220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:34.810768  876220 buildroot.go:174] setting up certificates
	I1114 15:53:34.810780  876220 provision.go:83] configureAuth start
	I1114 15:53:34.810788  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.811138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.814056  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.814537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814747  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.817131  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817513  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.817544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817675  876220 provision.go:138] copyHostCerts
	I1114 15:53:34.817774  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:34.817789  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:34.817869  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:34.817990  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:34.818006  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:34.818039  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:34.818117  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:34.818129  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:34.818161  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:34.818226  876220 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.embed-certs-279880 san=[192.168.39.147 192.168.39.147 localhost 127.0.0.1 minikube embed-certs-279880]
	I1114 15:53:34.925955  876220 provision.go:172] copyRemoteCerts
	I1114 15:53:34.926014  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:34.926039  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.928954  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929322  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.929346  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929520  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.929703  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.929866  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.930033  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.026199  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:35.049682  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1114 15:53:35.072415  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:53:35.097200  876220 provision.go:86] duration metric: configureAuth took 286.405404ms
	I1114 15:53:35.097226  876220 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:35.097425  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:53:35.097558  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.100561  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.100912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.100965  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.101091  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.101296  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101500  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101641  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.101795  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.102165  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.102198  876220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:35.411682  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:35.411719  876220 machine.go:91] provisioned docker machine in 902.419916ms
	I1114 15:53:35.411733  876220 start.go:300] post-start starting for "embed-certs-279880" (driver="kvm2")
	I1114 15:53:35.411748  876220 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:35.411770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.412161  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:35.412201  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.415071  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.415551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.415849  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.416000  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.416143  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.512565  876220 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:35.517087  876220 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:35.517146  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:35.517235  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:35.517356  876220 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:35.517511  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:35.527797  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:35.552798  876220 start.go:303] post-start completed in 141.045785ms
	I1114 15:53:35.552827  876220 fix.go:56] fixHost completed within 18.935326604s
	I1114 15:53:35.552855  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.555540  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.555930  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.555970  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.556155  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.556390  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556573  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.557007  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.557338  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.557348  876220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:35.689729  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977215.639237319
	
	I1114 15:53:35.689759  876220 fix.go:206] guest clock: 1699977215.639237319
	I1114 15:53:35.689769  876220 fix.go:219] Guest: 2023-11-14 15:53:35.639237319 +0000 UTC Remote: 2023-11-14 15:53:35.552830911 +0000 UTC m=+284.779127994 (delta=86.406408ms)
	I1114 15:53:35.689801  876220 fix.go:190] guest clock delta is within tolerance: 86.406408ms
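The clock check above reads the guest's time over SSH (presumably `date +%s.%N`; the %!s(MISSING).%!N(MISSING) in the logged command is Go's marker for unfilled format verbs) and compares it with the host clock, accepting small drift — here an 86ms delta. A hedged sketch of that comparison; the 2s tolerance below is an assumed value for illustration, not minikube's constant:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's "seconds.nanoseconds" timestamp and returns
	// how far the local clock is ahead of (or behind) the guest clock.
	func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/trim to nanosecond precision
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return local.Sub(time.Unix(sec, nsec)), nil
	}

	func main() {
		delta, err := clockDelta("1699977215.639237319", time.Now())
		if err != nil {
			fmt.Println(err)
			return
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration only
		if delta > -tolerance && delta < tolerance {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		} else {
			fmt.Printf("guest clock delta too large: %s\n", delta)
		}
	}
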
	I1114 15:53:35.689807  876220 start.go:83] releasing machines lock for "embed-certs-279880", held for 19.072338997s
	I1114 15:53:35.689842  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.690197  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:35.692786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693260  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.693311  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693440  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694011  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694222  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694338  876220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:35.694404  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.694455  876220 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:35.694484  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.697198  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697220  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697702  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697732  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697771  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697865  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698085  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698088  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698297  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698303  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698438  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698562  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.698974  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.813318  876220 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:35.819124  876220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:35.957038  876220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:35.964876  876220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:35.964984  876220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:35.980758  876220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:35.980780  876220 start.go:472] detecting cgroup driver to use...
	I1114 15:53:35.980848  876220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:35.993968  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:36.006564  876220 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:36.006626  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:36.021314  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:36.035842  876220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:36.147617  876220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:36.268025  876220 docker.go:219] disabling docker service ...
	I1114 15:53:36.268113  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:36.280847  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:36.292659  876220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:36.414923  876220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:36.534481  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:36.547652  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:36.565229  876220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:53:36.565312  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.574949  876220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:36.575030  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.585105  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.594790  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.603613  876220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:36.613116  876220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:36.620828  876220 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:36.620884  876220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:36.632600  876220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
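(Editor's note) The three commands above check the bridge netfilter sysctl, fall back to loading the br_netfilter module when the key is missing, and then enable IPv4 forwarding. A minimal sketch of the same fallback, assuming sudo and the same command names; this is illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf-call-iptables
// sysctl cannot be read, load the br_netfilter module, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key is absent, so br_netfilter is probably not loaded yet.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}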
	I1114 15:53:36.642150  876220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:36.756773  876220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:36.929381  876220 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:36.929467  876220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
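(Editor's note) After restarting CRI-O, the step above waits up to 60s for /var/run/crio/crio.sock to appear before proceeding. A small polling sketch of that wait follows; the 500ms poll interval is an assumed value.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout passes, similar in spirit to
// the "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}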
	I1114 15:53:36.934735  876220 start.go:540] Will wait 60s for crictl version
	I1114 15:53:36.934790  876220 ssh_runner.go:195] Run: which crictl
	I1114 15:53:36.940182  876220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:36.991630  876220 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:36.991718  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.045160  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.097281  876220 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:53:35.716835  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Start
	I1114 15:53:35.716987  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring networks are active...
	I1114 15:53:35.717715  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network default is active
	I1114 15:53:35.718030  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network mk-old-k8s-version-842105 is active
	I1114 15:53:35.718429  876396 main.go:141] libmachine: (old-k8s-version-842105) Getting domain xml...
	I1114 15:53:35.719055  876396 main.go:141] libmachine: (old-k8s-version-842105) Creating domain...
	I1114 15:53:36.991860  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting to get IP...
	I1114 15:53:36.992911  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:36.993376  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:36.993427  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:36.993318  877295 retry.go:31] will retry after 227.553321ms: waiting for machine to come up
	I1114 15:53:37.223023  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.223561  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.223629  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.223511  877295 retry.go:31] will retry after 308.951372ms: waiting for machine to come up
	I1114 15:53:37.098693  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:37.102205  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102676  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:37.102710  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102955  876220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:37.107113  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:37.120009  876220 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:53:37.120075  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:37.160178  876220 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:53:37.160241  876220 ssh_runner.go:195] Run: which lz4
	I1114 15:53:37.164351  876220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:53:37.168645  876220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:53:37.168684  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:53:39.026796  876220 crio.go:444] Took 1.862508 seconds to copy over tarball
	I1114 15:53:39.026876  876220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:53:37.534243  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.534797  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.534827  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.534774  877295 retry.go:31] will retry after 440.76682ms: waiting for machine to come up
	I1114 15:53:37.977712  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.978257  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.978287  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.978207  877295 retry.go:31] will retry after 402.601155ms: waiting for machine to come up
	I1114 15:53:38.383001  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.383515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.383551  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.383468  877295 retry.go:31] will retry after 580.977501ms: waiting for machine to come up
	I1114 15:53:38.966457  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.967088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.967121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.967026  877295 retry.go:31] will retry after 679.65563ms: waiting for machine to come up
	I1114 15:53:39.648086  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:39.648560  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:39.648593  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:39.648501  877295 retry.go:31] will retry after 1.014858956s: waiting for machine to come up
	I1114 15:53:40.664728  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:40.665285  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:40.665321  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:40.665230  877295 retry.go:31] will retry after 1.035036164s: waiting for machine to come up
	I1114 15:53:41.701639  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:41.702088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:41.702123  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:41.702029  877295 retry.go:31] will retry after 1.15711647s: waiting for machine to come up
	I1114 15:53:41.885259  876220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.858355323s)
	I1114 15:53:41.885288  876220 crio.go:451] Took 2.858463 seconds to extract the tarball
	I1114 15:53:41.885300  876220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:53:41.926498  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:41.972943  876220 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:53:41.972980  876220 cache_images.go:84] Images are preloaded, skipping loading
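(Editor's note) The preload decision above hinges on whether "crictl images --output json" already lists the expected images (for example registry.k8s.io/kube-apiserver:v1.28.3): before the tarball is extracted the image is missing, afterwards all images are reported as preloaded. A rough sketch of that check follows, assuming the JSON layout with an "images" array of "repoTags" entries.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models just enough of the crictl JSON output to check for a repo tag;
// the field names are assumed to match the CRI images JSON layout.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(out []byte, want string) (bool, error) {
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.3")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("preloaded apiserver image present:", ok)
}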
	I1114 15:53:41.973053  876220 ssh_runner.go:195] Run: crio config
	I1114 15:53:42.038006  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:53:42.038032  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:53:42.038053  876220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:53:42.038071  876220 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-279880 NodeName:embed-certs-279880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:53:42.038234  876220 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-279880"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:53:42.038323  876220 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-279880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:53:42.038394  876220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:53:42.050379  876220 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:53:42.050462  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:53:42.058921  876220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1114 15:53:42.074304  876220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:53:42.090403  876220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1114 15:53:42.106412  876220 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I1114 15:53:42.109907  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:42.122915  876220 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880 for IP: 192.168.39.147
	I1114 15:53:42.122945  876220 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:53:42.123106  876220 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:53:42.123148  876220 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:53:42.123226  876220 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/client.key
	I1114 15:53:42.123290  876220 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key.a88b087d
	I1114 15:53:42.123322  876220 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key
	I1114 15:53:42.123430  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:53:42.123456  876220 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:53:42.123467  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:53:42.123486  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:53:42.123517  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:53:42.123541  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:53:42.123584  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:42.124261  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:53:42.149787  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:53:42.177563  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:53:42.203326  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:53:42.228832  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:53:42.254674  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:53:42.280548  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:53:42.305318  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:53:42.331461  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:53:42.356555  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:53:42.382688  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:53:42.407945  876220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:53:42.424902  876220 ssh_runner.go:195] Run: openssl version
	I1114 15:53:42.430411  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:53:42.443033  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448429  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448496  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.455631  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:53:42.466421  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:53:42.476013  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480381  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480434  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.486048  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:53:42.495375  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:53:42.505366  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509762  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509804  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.515519  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:53:42.524838  876220 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:53:42.528912  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:53:42.534641  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:53:42.540138  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:53:42.545849  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:53:42.551518  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:53:42.559001  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
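(Editor's note) Each "openssl x509 -noout -checkend 86400" run above asks whether a certificate will expire within the next 24 hours. An equivalent check in Go using crypto/x509 is sketched below; the path and 24h window are taken from the log, everything else is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what "openssl x509 -noout -checkend 86400" tests for.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}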
	I1114 15:53:42.566135  876220 kubeadm.go:404] StartCluster: {Name:embed-certs-279880 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:53:42.566241  876220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:53:42.566297  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:42.613075  876220 cri.go:89] found id: ""
	I1114 15:53:42.613158  876220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:53:42.622675  876220 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:53:42.622696  876220 kubeadm.go:636] restartCluster start
	I1114 15:53:42.622785  876220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:53:42.631529  876220 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.633202  876220 kubeconfig.go:92] found "embed-certs-279880" server: "https://192.168.39.147:8443"
	I1114 15:53:42.636588  876220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:53:42.645531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.645578  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.656733  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.656764  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.656807  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.667524  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.168290  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.168372  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.181051  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.668650  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.668772  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.681727  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.168359  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.168462  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.182012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.668666  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.668763  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.680872  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.168505  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.168625  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.180321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.667875  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.668016  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.680318  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.861352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:42.861900  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:42.861963  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:42.861836  877295 retry.go:31] will retry after 2.117184279s: waiting for machine to come up
	I1114 15:53:44.982059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:44.982506  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:44.982538  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:44.982449  877295 retry.go:31] will retry after 2.3999215s: waiting for machine to come up
	I1114 15:53:46.168271  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.168410  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.180809  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:46.667886  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.668009  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.679468  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.168072  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.168204  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.180268  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.667786  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.667948  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.678927  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.168531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.168660  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.180004  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.668597  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.668752  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.680945  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.168543  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.168635  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.180012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.668382  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.668486  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.683691  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.168265  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.168353  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.179169  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.667618  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.667730  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.678707  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.384177  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:47.384695  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:47.384734  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:47.384649  877295 retry.go:31] will retry after 2.820309413s: waiting for machine to come up
	I1114 15:53:50.208736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:50.209188  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:50.209221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:50.209130  877295 retry.go:31] will retry after 2.822648093s: waiting for machine to come up
	I1114 15:53:51.168046  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.168144  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.179168  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:51.668301  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.668407  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.680321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.167971  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:52.168062  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:52.179159  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.645656  876220 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
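(Editor's note) The repeated "Checking apiserver status" attempts above poll pgrep for a kube-apiserver process about twice a second, and the restart path concludes "needs reconfigure" once the context deadline passes without a hit. A minimal sketch of such a deadline-bounded poll follows; the interval and timeout are assumptions.

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process until ctx expires,
// loosely mirroring the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" checks above.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	for {
		// pgrep exits non-zero when no matching process exists.
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := waitForAPIServer(ctx, 500*time.Millisecond)
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("needs reconfigure: apiserver error: context deadline exceeded")
	}
}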
	I1114 15:53:52.645688  876220 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:53:52.645702  876220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:53:52.645806  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:52.682368  876220 cri.go:89] found id: ""
	I1114 15:53:52.682482  876220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:53:52.697186  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:53:52.705449  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:53:52.705503  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714019  876220 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714054  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:52.831334  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.796131  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.984427  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.060195  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
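(Editor's note) The reconfigure path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml with the versioned binaries prepended to PATH. A sketch of driving those phases in order follows; the exact environment handling is an assumption, not minikube's real code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases seen in the restart path above,
// one after another, against a single kubeadm config file.
func runInitPhases(kubernetesVersion, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	env := "PATH=/var/lib/minikube/binaries/" + kubernetesVersion + ":" + os.Getenv("PATH")
	for _, phase := range phases {
		args := append([]string{"env", env, "kubeadm"}, phase...)
		args = append(args, "--config", config)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("v1.28.3", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}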
	I1114 15:53:54.137132  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:53:54.137217  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.155040  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.676264  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.176129  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.676614  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:53.034614  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:53.035044  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:53.035078  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:53.034993  877295 retry.go:31] will retry after 4.160398149s: waiting for machine to come up
	I1114 15:53:57.196776  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197211  876396 main.go:141] libmachine: (old-k8s-version-842105) Found IP for machine: 192.168.72.151
	I1114 15:53:57.197240  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserving static IP address...
	I1114 15:53:57.197260  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has current primary IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197667  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.197700  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserved static IP address: 192.168.72.151
	I1114 15:53:57.197724  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | skip adding static IP to network mk-old-k8s-version-842105 - found existing host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"}
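(Editor's note) The "will retry after ..." lines while waiting for the old-k8s-version-842105 VM to obtain an IP show a retry loop whose delay grows between attempts until the lease appears. A generic sketch of that pattern follows; the timeout, initial delay, and doubling cap are illustrative assumptions, not the backoff minikube's retry.go actually computes.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil keeps calling fn with a growing delay until it succeeds or the
// overall timeout passes.
func retryUntil(timeout, initial time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := initial
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, 200*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // stand-in for "unable to find current IP address"
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}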
	I1114 15:53:57.197742  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting for SSH to be available...
	I1114 15:53:57.197754  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Getting to WaitForSSH function...
	I1114 15:53:57.200279  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200646  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.200681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200916  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH client type: external
	I1114 15:53:57.200948  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa (-rw-------)
	I1114 15:53:57.200983  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:57.200999  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | About to run SSH command:
	I1114 15:53:57.201015  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | exit 0
	I1114 15:53:57.288554  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:57.288904  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetConfigRaw
	I1114 15:53:57.289691  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.292087  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292445  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.292501  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292720  876396 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/config.json ...
	I1114 15:53:57.292930  876396 machine.go:88] provisioning docker machine ...
	I1114 15:53:57.292950  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:57.293164  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293326  876396 buildroot.go:166] provisioning hostname "old-k8s-version-842105"
	I1114 15:53:57.293352  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293472  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.295765  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296139  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.296170  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296299  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.296470  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296625  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296768  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.296945  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.297524  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.297546  876396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-842105 && echo "old-k8s-version-842105" | sudo tee /etc/hostname
	I1114 15:53:58.537304  876668 start.go:369] acquired machines lock for "default-k8s-diff-port-529430" in 4m8.43196122s
	I1114 15:53:58.537380  876668 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:58.537392  876668 fix.go:54] fixHost starting: 
	I1114 15:53:58.537828  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:58.537865  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:58.555361  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I1114 15:53:58.555809  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:58.556346  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:53:58.556379  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:58.556762  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:58.556993  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:53:58.557144  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:53:58.558707  876668 fix.go:102] recreateIfNeeded on default-k8s-diff-port-529430: state=Stopped err=<nil>
	I1114 15:53:58.558736  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	W1114 15:53:58.558888  876668 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:58.561175  876668 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-529430" ...
	I1114 15:53:57.423888  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842105
	
	I1114 15:53:57.423971  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.427115  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427421  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.427459  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427660  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.427882  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428135  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428351  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.428584  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.429089  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.429124  876396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-842105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-842105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-842105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:57.554847  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:57.554893  876396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:57.554957  876396 buildroot.go:174] setting up certificates
	I1114 15:53:57.554974  876396 provision.go:83] configureAuth start
	I1114 15:53:57.554989  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.555342  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.558305  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.558711  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558876  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.561568  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.561937  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.561973  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.562106  876396 provision.go:138] copyHostCerts
	I1114 15:53:57.562196  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:57.562218  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:57.562284  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:57.562402  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:57.562413  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:57.562445  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:57.562520  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:57.562532  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:57.562561  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:57.562631  876396 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-842105 san=[192.168.72.151 192.168.72.151 localhost 127.0.0.1 minikube old-k8s-version-842105]
	I1114 15:53:57.825621  876396 provision.go:172] copyRemoteCerts
	I1114 15:53:57.825706  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:57.825739  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.828352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828732  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.828778  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828924  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.829159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.829356  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.829505  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:57.913614  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:57.935960  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:53:57.957927  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:53:57.980061  876396 provision.go:86] duration metric: configureAuth took 425.071777ms
	I1114 15:53:57.980109  876396 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:57.980308  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:53:57.980405  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.983736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984128  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.984161  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984367  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.984574  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984927  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.985116  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.985478  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.985505  876396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:58.297063  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:58.297107  876396 machine.go:91] provisioned docker machine in 1.004160647s
	I1114 15:53:58.297121  876396 start.go:300] post-start starting for "old-k8s-version-842105" (driver="kvm2")
	I1114 15:53:58.297135  876396 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:58.297159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.297578  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:58.297624  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.300608  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301051  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.301081  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301312  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.301485  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.301655  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.301774  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.387785  876396 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:58.391947  876396 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:58.391974  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:58.392056  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:58.392177  876396 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:58.392301  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:58.401525  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:58.422853  876396 start.go:303] post-start completed in 125.713467ms
	I1114 15:53:58.422892  876396 fix.go:56] fixHost completed within 22.732917848s
	I1114 15:53:58.422922  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.425682  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.426098  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426282  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.426487  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426663  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426830  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.427040  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:58.427400  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:58.427416  876396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:58.537121  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977238.485050071
	
	I1114 15:53:58.537151  876396 fix.go:206] guest clock: 1699977238.485050071
	I1114 15:53:58.537161  876396 fix.go:219] Guest: 2023-11-14 15:53:58.485050071 +0000 UTC Remote: 2023-11-14 15:53:58.422897103 +0000 UTC m=+286.112017318 (delta=62.152968ms)
	I1114 15:53:58.537187  876396 fix.go:190] guest clock delta is within tolerance: 62.152968ms
	I1114 15:53:58.537206  876396 start.go:83] releasing machines lock for "old-k8s-version-842105", held for 22.847251095s
	I1114 15:53:58.537248  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.537548  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:58.540515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.540932  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.540974  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.541110  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541612  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541912  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.542012  876396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:58.542077  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.542171  876396 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:58.542202  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.544841  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545190  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545246  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545465  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.545666  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.545694  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545714  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545816  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.545905  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.546006  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.546067  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.546212  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.546365  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.626301  876396 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:58.651834  876396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:58.799770  876396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:58.806042  876396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:58.806134  876396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:58.824707  876396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:58.824752  876396 start.go:472] detecting cgroup driver to use...
	I1114 15:53:58.824824  876396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:58.840144  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:58.854846  876396 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:58.854905  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:58.869926  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:58.883517  876396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:58.990519  876396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:59.108637  876396 docker.go:219] disabling docker service ...
	I1114 15:53:59.108712  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:59.124681  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:59.138748  876396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:59.260422  876396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:59.364365  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:59.376773  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:59.394948  876396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:53:59.395027  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.404000  876396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:59.404074  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.412822  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.421316  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.429685  876396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:59.438818  876396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:59.446459  876396 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:59.446533  876396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:59.459160  876396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:59.467670  876396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:59.579125  876396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:59.794436  876396 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:59.794525  876396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:59.801013  876396 start.go:540] Will wait 60s for crictl version
	I1114 15:53:59.801095  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:53:59.805735  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:59.851270  876396 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:59.851383  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.898885  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.953911  876396 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1114 15:53:58.562788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Start
	I1114 15:53:58.562971  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring networks are active...
	I1114 15:53:58.563570  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network default is active
	I1114 15:53:58.564001  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network mk-default-k8s-diff-port-529430 is active
	I1114 15:53:58.564406  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Getting domain xml...
	I1114 15:53:58.565186  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Creating domain...
	I1114 15:53:59.907130  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting to get IP...
	I1114 15:53:59.908507  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.908991  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.909128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:53:59.908977  877437 retry.go:31] will retry after 306.122553ms: waiting for machine to come up
	I1114 15:53:56.176595  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.676568  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.699015  876220 api_server.go:72] duration metric: took 2.561885213s to wait for apiserver process to appear ...
	I1114 15:53:56.699041  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:53:56.699058  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:53:59.955466  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:59.959121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959545  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:59.959572  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959896  876396 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:59.965859  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:59.982494  876396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 15:53:59.982563  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:00.029401  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:00.029483  876396 ssh_runner.go:195] Run: which lz4
	I1114 15:54:00.034065  876396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:54:00.039738  876396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:00.039780  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1114 15:54:01.846049  876396 crio.go:444] Took 1.812024 seconds to copy over tarball
	I1114 15:54:01.846160  876396 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:01.387625  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.387668  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.387690  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.430505  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.430539  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.930801  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.937138  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:01.937169  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.431712  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.442719  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:02.442758  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.931021  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.938062  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:54:02.947420  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:02.947453  876220 api_server.go:131] duration metric: took 6.248404315s to wait for apiserver health ...
	I1114 15:54:02.947465  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:54:02.947479  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:02.949231  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:00.216693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217419  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217476  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.217346  877437 retry.go:31] will retry after 276.469735ms: waiting for machine to come up
	I1114 15:54:00.496200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496596  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496632  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.496550  877437 retry.go:31] will retry after 390.20616ms: waiting for machine to come up
	I1114 15:54:00.888367  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889341  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.889235  877437 retry.go:31] will retry after 551.896336ms: waiting for machine to come up
	I1114 15:54:01.443159  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443794  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443825  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:01.443756  877437 retry.go:31] will retry after 655.228992ms: waiting for machine to come up
	I1114 15:54:02.100194  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100681  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100716  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.100609  877437 retry.go:31] will retry after 896.817469ms: waiting for machine to come up
	I1114 15:54:02.999296  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999947  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999979  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.999897  877437 retry.go:31] will retry after 1.177419274s: waiting for machine to come up
	I1114 15:54:04.178783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179425  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179452  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:04.179351  877437 retry.go:31] will retry after 1.259348434s: waiting for machine to come up
	I1114 15:54:02.950643  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:02.986775  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:03.054339  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:03.074346  876220 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:03.074405  876220 system_pods.go:61] "coredns-5dd5756b68-gqxld" [0b846e58-0bbc-4770-94a4-8324753b36c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:03.074428  876220 system_pods.go:61] "etcd-embed-certs-279880" [e085e7a7-ec2e-4cf6-bbb2-d242a5e8d075] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:03.074442  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [4ffbfbaf-9978-4bb1-9e4e-ef23365f78fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:03.074455  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [d895906c-899f-41b3-9484-1a6985b978f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:03.074471  876220 system_pods.go:61] "kube-proxy-j2qnm" [feee8604-a749-4908-8361-42f63d55ec64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:03.074485  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [4325a0ba-9013-4899-b01b-befcb4cd5b72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:03.074504  876220 system_pods.go:61] "metrics-server-57f55c9bc5-gvtbw" [a7c44219-4b00-49c0-817f-68f9499f1ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:03.074531  876220 system_pods.go:61] "storage-provisioner" [f464123e-8329-4785-87ae-78ff30ac7d27] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:03.074547  876220 system_pods.go:74] duration metric: took 20.179327ms to wait for pod list to return data ...
	I1114 15:54:03.074558  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:03.078482  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:03.078526  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:03.078542  876220 node_conditions.go:105] duration metric: took 3.972732ms to run NodePressure ...
	I1114 15:54:03.078565  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:03.514232  876220 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521097  876220 kubeadm.go:787] kubelet initialised
	I1114 15:54:03.521125  876220 kubeadm.go:788] duration metric: took 6.859971ms waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521168  876220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:03.528777  876220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:05.249338  876396 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403140591s)
	I1114 15:54:05.249383  876396 crio.go:451] Took 3.403300 seconds to extract the tarball
	I1114 15:54:05.249397  876396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:05.298779  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:05.351838  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:05.351873  876396 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:05.352034  876396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.352124  876396 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.352201  876396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.352219  876396 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.352067  876396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.352087  876396 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354089  876396 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1114 15:54:05.354101  876396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.354115  876396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.354117  876396 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354097  876396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.354178  876396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.354197  876396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.354270  876396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.512829  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.521658  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.529228  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1114 15:54:05.529451  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.529597  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.529802  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.534672  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.613591  876396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1114 15:54:05.613650  876396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.613721  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.644613  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.668090  876396 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1114 15:54:05.668167  876396 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.668231  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.685343  876396 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1114 15:54:05.685398  876396 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1114 15:54:05.685458  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725459  876396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1114 15:54:05.725508  876396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.725523  876396 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1114 15:54:05.725561  876396 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.725565  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725602  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727180  876396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1114 15:54:05.727215  876396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.727249  876396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1114 15:54:05.727283  876396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.727254  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727322  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.727325  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.849608  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.849657  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1114 15:54:05.849694  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.849747  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.849753  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.849830  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1114 15:54:05.849847  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.990379  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1114 15:54:05.990536  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1114 15:54:06.006943  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1114 15:54:06.006966  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1114 15:54:06.007017  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1114 15:54:06.007076  876396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.007134  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1114 15:54:06.013121  876396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1114 15:54:06.013141  876396 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.013192  876396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1114 15:54:05.440685  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441307  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441342  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:05.441243  877437 retry.go:31] will retry after 1.84307404s: waiting for machine to come up
	I1114 15:54:07.286027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286581  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286612  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:07.286501  877437 retry.go:31] will retry after 2.149522769s: waiting for machine to come up
	I1114 15:54:09.437500  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.437998  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.438027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:09.437930  877437 retry.go:31] will retry after 1.825733531s: waiting for machine to come up
	I1114 15:54:06.558998  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.056443  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.550292  876220 pod_ready.go:92] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:09.550325  876220 pod_ready.go:81] duration metric: took 6.02152032s waiting for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:09.550338  876220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:07.587512  876396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.574275406s)
	I1114 15:54:07.587549  876396 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1114 15:54:07.587609  876396 cache_images.go:92] LoadImages completed in 2.235719587s
	W1114 15:54:07.587745  876396 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1114 15:54:07.587935  876396 ssh_runner.go:195] Run: crio config
	I1114 15:54:07.677561  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:07.677590  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:07.677624  876396 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:07.677649  876396 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-842105 NodeName:old-k8s-version-842105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 15:54:07.677852  876396 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-842105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-842105
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.151:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:07.677991  876396 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-842105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:54:07.678072  876396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1114 15:54:07.690041  876396 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:07.690195  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:07.699428  876396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1114 15:54:07.717871  876396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:07.736451  876396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1114 15:54:07.760405  876396 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:07.766002  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:07.782987  876396 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105 for IP: 192.168.72.151
	I1114 15:54:07.783024  876396 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:07.783232  876396 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:07.783328  876396 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:07.783435  876396 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/client.key
	I1114 15:54:07.783530  876396 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key.8e16fdf2
	I1114 15:54:07.783587  876396 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key
	I1114 15:54:07.783733  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:07.783774  876396 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:07.783788  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:07.783825  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:07.783860  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:07.783903  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:07.783976  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:07.784951  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:07.817959  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:07.849497  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:07.882885  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:54:07.917706  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:07.951168  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:07.980449  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:08.004910  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:08.038634  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:08.068999  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:08.099934  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:08.131714  876396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:08.150662  876396 ssh_runner.go:195] Run: openssl version
	I1114 15:54:08.158258  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:08.168218  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173533  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173650  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.179886  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:08.189654  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:08.199563  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204439  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204512  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.210587  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:08.220509  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:08.233859  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240418  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240484  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.248025  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:08.261693  876396 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:08.267518  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:08.275553  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:08.283812  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:08.292063  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:08.299976  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:08.307726  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:54:08.315248  876396 kubeadm.go:404] StartCluster: {Name:old-k8s-version-842105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:08.315441  876396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:08.315509  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:08.373222  876396 cri.go:89] found id: ""
	I1114 15:54:08.373309  876396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:08.386081  876396 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:08.386113  876396 kubeadm.go:636] restartCluster start
	I1114 15:54:08.386175  876396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:08.398113  876396 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.399779  876396 kubeconfig.go:92] found "old-k8s-version-842105" server: "https://192.168.72.151:8443"
	I1114 15:54:08.403355  876396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:08.415044  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.415107  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.431221  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.431246  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.431301  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.441629  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.941906  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.942002  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.953895  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.442080  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.442167  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.454396  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.941960  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.942060  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.957741  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.442467  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.442585  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.459029  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.942110  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.942218  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.958207  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.441724  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.441846  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.456551  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.942092  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.942207  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.954734  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.265162  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265717  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:11.265645  877437 retry.go:31] will retry after 3.454522942s: waiting for machine to come up
	I1114 15:54:14.722448  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722869  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722900  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:14.722811  877437 retry.go:31] will retry after 4.385736497s: waiting for machine to come up
	I1114 15:54:11.568989  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:11.569021  876220 pod_ready.go:81] duration metric: took 2.018672405s waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:11.569032  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:13.599380  876220 pod_ready.go:102] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:15.095781  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.095806  876220 pod_ready.go:81] duration metric: took 3.52676767s waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.095816  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101837  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.101860  876220 pod_ready.go:81] duration metric: took 6.035008ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101871  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107099  876220 pod_ready.go:92] pod "kube-proxy-j2qnm" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.107119  876220 pod_ready.go:81] duration metric: took 5.239707ms waiting for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107131  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146726  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.146753  876220 pod_ready.go:81] duration metric: took 39.614218ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146765  876220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:12.442685  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.442780  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.456555  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:12.941805  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.941902  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.955572  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.442111  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.442220  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.455769  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.941932  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.942051  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.957167  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.442727  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.442855  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.455220  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.941815  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.941911  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.955030  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.441942  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.442064  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.454228  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.942207  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.942299  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.955845  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.442537  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.442642  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.454339  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.941837  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.941933  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.955292  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:19.110067  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.110621  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Found IP for machine: 192.168.61.196
	I1114 15:54:19.110650  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserving static IP address...
	I1114 15:54:19.110682  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has current primary IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.111082  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.111142  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | skip adding static IP to network mk-default-k8s-diff-port-529430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"}
	I1114 15:54:19.111163  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserved static IP address: 192.168.61.196
	I1114 15:54:19.111178  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for SSH to be available...
	I1114 15:54:19.111191  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Getting to WaitForSSH function...
	I1114 15:54:19.113739  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114145  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.114196  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114327  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH client type: external
	I1114 15:54:19.114358  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa (-rw-------)
	I1114 15:54:19.114395  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:19.114417  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | About to run SSH command:
	I1114 15:54:19.114432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | exit 0
	I1114 15:54:19.213651  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:19.214087  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetConfigRaw
	I1114 15:54:19.214767  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.217678  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218072  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.218099  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218414  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:54:19.218634  876668 machine.go:88] provisioning docker machine ...
	I1114 15:54:19.218662  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:19.218923  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219132  876668 buildroot.go:166] provisioning hostname "default-k8s-diff-port-529430"
	I1114 15:54:19.219155  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219292  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.221719  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222106  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.222129  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222272  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.222435  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222606  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222748  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.222907  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.223312  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.223328  876668 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-529430 && echo "default-k8s-diff-port-529430" | sudo tee /etc/hostname
	I1114 15:54:19.373658  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-529430
	
	I1114 15:54:19.373691  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.376972  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377388  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.377432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377549  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.377754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.377934  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.378123  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.378325  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.378667  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.378685  876668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-529430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-529430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-529430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:19.523410  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:19.523453  876668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:19.523498  876668 buildroot.go:174] setting up certificates
	I1114 15:54:19.523511  876668 provision.go:83] configureAuth start
	I1114 15:54:19.523530  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.523872  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.526757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527213  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.527242  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.530193  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530590  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.530630  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530794  876668 provision.go:138] copyHostCerts
	I1114 15:54:19.530862  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:19.530886  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:19.530965  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:19.531069  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:19.531078  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:19.531104  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:19.531179  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:19.531188  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:19.531218  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:19.531285  876668 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-529430 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube default-k8s-diff-port-529430]
	I1114 15:54:19.845785  876668 provision.go:172] copyRemoteCerts
	I1114 15:54:19.845852  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:19.845880  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.849070  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849461  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.849492  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.849916  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.850139  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.850326  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:19.946041  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:19.976301  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1114 15:54:20.667697  876065 start.go:369] acquired machines lock for "no-preload-490998" in 59.048435079s
	I1114 15:54:20.667765  876065 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:54:20.667776  876065 fix.go:54] fixHost starting: 
	I1114 15:54:20.668233  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:20.668278  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:20.689041  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I1114 15:54:20.689574  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:20.690138  876065 main.go:141] libmachine: Using API Version  1
	I1114 15:54:20.690168  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:20.690554  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:20.690760  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:20.690909  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:54:20.692627  876065 fix.go:102] recreateIfNeeded on no-preload-490998: state=Stopped err=<nil>
	I1114 15:54:20.692652  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	W1114 15:54:20.692849  876065 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:54:20.694674  876065 out.go:177] * Restarting existing kvm2 VM for "no-preload-490998" ...
	I1114 15:54:17.454958  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:19.455250  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:20.001972  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:54:20.026531  876668 provision.go:86] duration metric: configureAuth took 502.998106ms
	I1114 15:54:20.026585  876668 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:20.026832  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:20.026965  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.030385  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.030791  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030974  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.031200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031423  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.031861  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.032341  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.032367  876668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:20.394771  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:20.394805  876668 machine.go:91] provisioned docker machine in 1.176155811s
	I1114 15:54:20.394818  876668 start.go:300] post-start starting for "default-k8s-diff-port-529430" (driver="kvm2")
	I1114 15:54:20.394832  876668 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:20.394853  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.395240  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:20.395288  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.398478  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.398906  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.398945  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.399107  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.399344  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.399584  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.399752  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.491251  876668 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:20.495507  876668 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:20.495538  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:20.495627  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:20.495718  876668 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:20.495814  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:20.504112  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:20.527100  876668 start.go:303] post-start completed in 132.264495ms
	I1114 15:54:20.527124  876668 fix.go:56] fixHost completed within 21.989733182s
	I1114 15:54:20.527150  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.530055  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530460  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.530502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530660  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.530868  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531069  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531281  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.531458  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.531874  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.531889  876668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:54:20.667502  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977260.612374456
	
	I1114 15:54:20.667529  876668 fix.go:206] guest clock: 1699977260.612374456
	I1114 15:54:20.667536  876668 fix.go:219] Guest: 2023-11-14 15:54:20.612374456 +0000 UTC Remote: 2023-11-14 15:54:20.527127621 +0000 UTC m=+270.585277055 (delta=85.246835ms)
	I1114 15:54:20.667591  876668 fix.go:190] guest clock delta is within tolerance: 85.246835ms
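The fix.go lines above compare the guest VM's clock (read via "date +%s.%N" over SSH) against the host clock and accept the drift when it stays under a tolerance. A minimal Go sketch of that kind of check follows; it is not minikube's actual implementation, and the one-second tolerance is an assumed value used only for illustration.

// Minimal sketch (not minikube's fix.go) of a guest-clock tolerance check:
// parse the guest's "date +%s.%N" output, compare it to the local clock,
// and decide whether the delta is small enough to skip resynchronisation.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime converts "1699977260.612374456" (seconds.nanoseconds) into time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, for illustration only
	guest, err := guestTime("1699977260.612374456")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}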
	I1114 15:54:20.667604  876668 start.go:83] releasing machines lock for "default-k8s-diff-port-529430", held for 22.130251397s
	I1114 15:54:20.667642  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.668017  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:20.671690  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672166  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.672199  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672583  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673190  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673412  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673507  876668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:20.673573  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.673677  876668 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:20.673702  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.677394  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677505  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677813  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.677847  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678133  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.678165  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678228  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678331  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678456  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678543  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678799  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.679008  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.770378  876668 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:20.799026  876668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:20.952410  876668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:20.960020  876668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:20.960164  876668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:20.976497  876668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
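The two steps above locate bridge/podman CNI config files under /etc/cni/net.d and rename them with a .mk_disabled suffix so the runtime ignores them. minikube does this with the quoted find/mv command over SSH; the Go sketch below only illustrates the same rename-to-.mk_disabled idea as a local operation and is not minikube's code.

// Hypothetical local re-implementation of the "disable bridge/podman CNI
// configs" step logged above: rename matching files under /etc/cni/net.d
// to <name>.mk_disabled so they are no longer picked up.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}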
	I1114 15:54:20.976537  876668 start.go:472] detecting cgroup driver to use...
	I1114 15:54:20.976623  876668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:20.995510  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:21.008750  876668 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:21.008824  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:21.021811  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:21.035329  876668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:21.148775  876668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:21.285242  876668 docker.go:219] disabling docker service ...
	I1114 15:54:21.285318  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:21.298782  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:21.316123  876668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:21.488090  876668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:21.618889  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:21.632974  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:21.655781  876668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:21.655882  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.669231  876668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:21.669316  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.678786  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.688193  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.698797  876668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:21.709360  876668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:21.718312  876668 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:21.718380  876668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:21.736502  876668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
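When the sysctl probe above fails because br_netfilter is not yet loaded, the runner falls back to modprobe and then enables IPv4 forwarding by writing to /proc. A small sketch of those two settings follows; it assumes root, is for illustration only, and stands in for the shell commands minikube actually runs over SSH.

// Minimal sketch (assumes root; illustration only) of the two kernel
// settings applied above: load br_netfilter so the bridge-nf-call-*
// sysctls exist, then enable net.ipv4.ip_forward via /proc.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Load the bridge-netfilter module.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}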
	I1114 15:54:21.746439  876668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:21.863214  876668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:22.102179  876668 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:22.102265  876668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:22.108046  876668 start.go:540] Will wait 60s for crictl version
	I1114 15:54:22.108121  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:54:22.113795  876668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:22.165127  876668 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:22.165229  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.225931  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.294400  876668 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:17.442023  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.442115  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.454984  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:17.942288  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.942367  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.954587  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:18.415437  876396 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:18.415476  876396 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:18.415510  876396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:18.415594  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:18.457148  876396 cri.go:89] found id: ""
	I1114 15:54:18.457220  876396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:18.473763  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:18.482554  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
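The status-2 result above means at least one expected kubeconfig file is missing, so the restart path skips stale-config cleanup and proceeds to regenerate them. A short sketch of that presence check follows; the file list matches the ls invocation above, but the code itself is illustrative rather than minikube's.

// Sketch of the stale-config check logged above: only if every expected
// kubeconfig file is present does the restart path bother cleaning them;
// if any is missing (as here), cleanup is skipped.
package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	allPresent := true
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("missing %s: %v\n", f, err)
			allPresent = false
		}
	}
	if !allPresent {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}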
	I1114 15:54:18.482618  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491282  876396 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491331  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:18.611750  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.639893  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.02808682s)
	I1114 15:54:19.639964  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.850775  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.939183  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:20.055296  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:20.055384  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.076978  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.591616  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.091982  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.591312  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.635294  876396 api_server.go:72] duration metric: took 1.579988958s to wait for apiserver process to appear ...
	I1114 15:54:21.635323  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:21.635345  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:20.696162  876065 main.go:141] libmachine: (no-preload-490998) Calling .Start
	I1114 15:54:20.696380  876065 main.go:141] libmachine: (no-preload-490998) Ensuring networks are active...
	I1114 15:54:20.697208  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network default is active
	I1114 15:54:20.697665  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network mk-no-preload-490998 is active
	I1114 15:54:20.698105  876065 main.go:141] libmachine: (no-preload-490998) Getting domain xml...
	I1114 15:54:20.698815  876065 main.go:141] libmachine: (no-preload-490998) Creating domain...
	I1114 15:54:22.152078  876065 main.go:141] libmachine: (no-preload-490998) Waiting to get IP...
	I1114 15:54:22.153475  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.153983  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.154071  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.153960  877583 retry.go:31] will retry after 305.242943ms: waiting for machine to come up
	I1114 15:54:22.460636  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.461432  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.461609  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.461568  877583 retry.go:31] will retry after 354.226558ms: waiting for machine to come up
	I1114 15:54:22.817225  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.817884  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.817999  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.817955  877583 retry.go:31] will retry after 337.727596ms: waiting for machine to come up
	I1114 15:54:23.157897  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.158614  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.158724  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.158679  877583 retry.go:31] will retry after 375.356441ms: waiting for machine to come up
	I1114 15:54:23.536061  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.536607  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.536633  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.536565  877583 retry.go:31] will retry after 652.853452ms: waiting for machine to come up
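The retry.go lines above poll libvirt for the domain's DHCP lease with growing, jittered intervals ("will retry after 305ms / 354ms / ..."). The sketch below shows a poll-with-backoff loop in that spirit; lookupIP is a hypothetical stand-in for the libvirt query, and the timings are chosen for illustration rather than taken from minikube.

// Sketch of a poll-with-growing-backoff loop in the spirit of the
// retry.go messages above. lookupIP is hypothetical; a real
// implementation would query libvirt for the domain's DHCP lease.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a stand-in; it always reports that the lease is missing.
func lookupIP() (string, error) { return "", errNoLease }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the delay a little, then grow it, like the logged retries.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 5*time.Second {
			backoff += backoff / 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}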
	I1114 15:54:22.295757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:22.299345  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.299749  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:22.299788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.300017  876668 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:22.305363  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
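The bash one-liner above rewrites /etc/hosts so it carries exactly one host.minikube.internal entry pointing at the gateway IP. The Go sketch below shows the same drop-then-append idea as a local file rewrite; it is an illustration, not minikube's code, and assumes write access to the file.

// Sketch of an idempotent hosts-file update: drop any existing line
// ending in the given hostname, append a fresh "<ip>\t<name>" entry,
// and write the result back via a temp file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}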
	I1114 15:54:22.318715  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:22.318773  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:22.368522  876668 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:22.368595  876668 ssh_runner.go:195] Run: which lz4
	I1114 15:54:22.373798  876668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:54:22.379337  876668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:22.379368  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:54:24.194028  876668 crio.go:444] Took 1.820276 seconds to copy over tarball
	I1114 15:54:24.194111  876668 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:21.457059  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:23.458432  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:26.636325  876396 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:54:26.636396  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:24.191080  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:24.191648  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:24.191685  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:24.191565  877583 retry.go:31] will retry after 883.93292ms: waiting for machine to come up
	I1114 15:54:25.076820  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:25.077325  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:25.077370  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:25.077290  877583 retry.go:31] will retry after 1.071889504s: waiting for machine to come up
	I1114 15:54:26.151239  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:26.151777  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:26.151812  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:26.151734  877583 retry.go:31] will retry after 1.05055701s: waiting for machine to come up
	I1114 15:54:27.204714  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:27.205193  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:27.205216  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:27.205147  877583 retry.go:31] will retry after 1.366779273s: waiting for machine to come up
	I1114 15:54:28.573131  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:28.573578  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:28.573605  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:28.573548  877583 retry.go:31] will retry after 1.629033633s: waiting for machine to come up
	I1114 15:54:27.635092  876668 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.440943465s)
	I1114 15:54:27.635134  876668 crio.go:451] Took 3.441078 seconds to extract the tarball
	I1114 15:54:27.635148  876668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:27.685486  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:27.742411  876668 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:54:27.742499  876668 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:54:27.742596  876668 ssh_runner.go:195] Run: crio config
	I1114 15:54:27.815555  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:27.815579  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:27.815601  876668 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:27.815624  876668 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-529430 NodeName:default-k8s-diff-port-529430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:54:27.815789  876668 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-529430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:27.815921  876668 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-529430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1114 15:54:27.815999  876668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:54:27.825716  876668 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:27.825799  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:27.838987  876668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1114 15:54:27.855187  876668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:27.872995  876668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1114 15:54:27.890455  876668 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:27.895678  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:27.909953  876668 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430 for IP: 192.168.61.196
	I1114 15:54:27.909999  876668 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:27.910204  876668 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:27.910271  876668 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:27.910463  876668 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/client.key
	I1114 15:54:27.910558  876668 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key.0d67e2f2
	I1114 15:54:27.910616  876668 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key
	I1114 15:54:27.910753  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:27.910797  876668 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:27.910811  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:27.910872  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:27.910917  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:27.910950  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:27.911007  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:27.911985  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:27.937341  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:27.963511  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:27.990011  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:54:28.016668  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:28.048528  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:28.077392  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:28.107784  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:28.136600  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:28.163995  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:28.191715  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:28.223205  876668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:28.243672  876668 ssh_runner.go:195] Run: openssl version
	I1114 15:54:28.249895  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:28.260568  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266792  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266887  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.273048  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:28.283458  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:28.294810  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300316  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300384  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.306193  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:28.319260  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:28.332843  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339044  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339120  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.346094  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
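The openssl/ln pairs above follow OpenSSL's subject-hash convention: each CA certificate in /etc/ssl/certs is reachable through a symlink named "<subject hash>.0", where the hash comes from "openssl x509 -hash -noout -in <pem>". The sketch below shows how such a link could be created; it assumes openssl on PATH and write access to the certs directory, and it is not the exact command sequence minikube runs.

// Sketch (illustration only) of the hash-and-symlink convention visible
// above: compute the subject hash of a PEM with openssl, then point
// <hash>.0 in the certs directory at the PEM file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then create <hash>.0 -> <pem>.
	_ = os.Remove(link)
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created", link)
}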
	I1114 15:54:28.359711  876668 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:28.365300  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:28.372965  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:28.380378  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:28.387801  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:28.395228  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:28.401252  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:54:28.407435  876668 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:28.407581  876668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:28.407663  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:28.462877  876668 cri.go:89] found id: ""
	I1114 15:54:28.462962  876668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:28.473800  876668 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:28.473828  876668 kubeadm.go:636] restartCluster start
	I1114 15:54:28.473885  876668 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:28.485255  876668 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.486649  876668 kubeconfig.go:92] found "default-k8s-diff-port-529430" server: "https://192.168.61.196:8444"
	I1114 15:54:28.489408  876668 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:28.499927  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.499990  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.512175  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.512193  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.512238  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.524128  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.025143  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.025234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.040757  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.525035  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.525153  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.538214  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.174172  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:28.174207  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:28.674934  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.145414  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.145459  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.174596  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.231115  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.231157  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.674653  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.813013  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.813052  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.174424  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.183371  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:30.183427  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.675007  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.686069  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:54:30.697376  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:54:30.697472  876396 api_server.go:131] duration metric: took 9.062139934s to wait for apiserver health ...
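The api_server.go lines above poll the apiserver's /healthz endpoint, tolerating 403 and 500 responses while the bootstrap post-start hooks finish, until it finally answers 200 "ok". A minimal Go sketch of such a polling loop follows; it skips TLS verification purely for brevity, whereas a real check would trust the cluster CA instead.

// Minimal sketch of polling an apiserver /healthz endpoint until it
// reports 200 "ok". TLS verification is skipped for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return nil
			}
			// 403/500 while RBAC bootstrap hooks finish is expected; keep polling.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.151:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}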
	I1114 15:54:30.697503  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:30.697535  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:30.699476  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:25.957052  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:28.490572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:30.701025  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:30.729153  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:30.770856  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:30.785989  876396 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:30.786041  876396 system_pods.go:61] "coredns-5644d7b6d9-dxtd8" [4d22eb1f-551c-49a1-a519-7420c3774e46] Running
	I1114 15:54:30.786051  876396 system_pods.go:61] "etcd-old-k8s-version-842105" [d4d5d869-b609-4017-8cf1-071b11f69d18] Running
	I1114 15:54:30.786057  876396 system_pods.go:61] "kube-apiserver-old-k8s-version-842105" [43e84141-4938-4808-bba5-14080a0a7b9e] Running
	I1114 15:54:30.786063  876396 system_pods.go:61] "kube-controller-manager-old-k8s-version-842105" [8fca7797-f3a1-4223-a921-0819aca95ce7] Running
	I1114 15:54:30.786069  876396 system_pods.go:61] "kube-proxy-kw2ns" [c6b5fbe3-a473-4120-bc41-fb85f6d3841d] Running
	I1114 15:54:30.786074  876396 system_pods.go:61] "kube-scheduler-old-k8s-version-842105" [c9cad8bb-b7a9-44fd-92d3-d3360284c9f3] Running
	I1114 15:54:30.786082  876396 system_pods.go:61] "metrics-server-74d5856cc6-q9hc5" [1333b6de-5f3f-4937-8e73-d2b7f2c6d37e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:30.786091  876396 system_pods.go:61] "storage-provisioner" [2d95ef7e-626e-4840-9f5d-708cd8c66576] Running
	I1114 15:54:30.786107  876396 system_pods.go:74] duration metric: took 15.207693ms to wait for pod list to return data ...
	I1114 15:54:30.786125  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:30.799034  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:30.799089  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:30.799105  876396 node_conditions.go:105] duration metric: took 12.974469ms to run NodePressure ...
	I1114 15:54:30.799137  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:31.065040  876396 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:31.068697  876396 retry.go:31] will retry after 147.435912ms: kubelet not initialised
	I1114 15:54:31.225671  876396 retry.go:31] will retry after 334.031544ms: kubelet not initialised
	I1114 15:54:31.565487  876396 retry.go:31] will retry after 641.328262ms: kubelet not initialised
	I1114 15:54:32.215327  876396 retry.go:31] will retry after 1.211422414s: kubelet not initialised
	I1114 15:54:30.204276  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:30.204775  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:30.204811  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:30.204713  877583 retry.go:31] will retry after 1.909641151s: waiting for machine to come up
	I1114 15:54:32.115658  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:32.116175  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:32.116209  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:32.116116  877583 retry.go:31] will retry after 3.266336566s: waiting for machine to come up
	I1114 15:54:30.024900  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.025024  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.041104  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.524842  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.524920  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.540643  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.025166  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.040723  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.525252  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.525364  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.537978  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.024495  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.024626  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.037625  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.524934  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.525053  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.540579  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.025237  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.025366  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.037675  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.524206  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.524300  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.537100  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.025150  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.039435  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.525030  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.525140  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.541014  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.957869  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.458285  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:35.458815  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.432677  876396 retry.go:31] will retry after 864.36813ms: kubelet not initialised
	I1114 15:54:34.302450  876396 retry.go:31] will retry after 2.833071739s: kubelet not initialised
	I1114 15:54:37.142128  876396 retry.go:31] will retry after 2.880672349s: kubelet not initialised
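	The "kubelet not initialised" lines above are minikube polling the restarted kubelet with growing backoff. When this stalls, the usual manual checks inside the guest are plain systemd commands (a sketch; these are standard commands, not taken from this log):

	    # Inspect the kubelet that minikube is waiting on.
	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet --no-pager -n 100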
	I1114 15:54:35.386010  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:35.386483  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:35.386526  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:35.386417  877583 retry.go:31] will retry after 3.791360608s: waiting for machine to come up
	I1114 15:54:35.024814  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.024924  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.038035  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:35.524433  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.524540  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.538065  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.024585  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.024690  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.036540  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.525201  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.525293  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.537751  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.024292  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.024388  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.037480  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.525115  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.525234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.538365  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.025002  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:38.025148  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:38.036994  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.500770  876668 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:38.500813  876668 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:38.500860  876668 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:38.500951  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:38.538468  876668 cri.go:89] found id: ""
	I1114 15:54:38.538571  876668 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:38.554809  876668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:38.563961  876668 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:38.564025  876668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572905  876668 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572930  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:38.694403  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.614869  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.815977  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.914051  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
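	The five commands above re-run the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml. Collected into one script, the equivalent sequence looks roughly like this (a sketch that mirrors the logged commands; the binary path and version are the ones shown in the log):

	    #!/bin/bash
	    # Re-run the same kubeadm phases minikube invokes while reconfiguring the cluster.
	    set -e
	    KUBE_PATH="/var/lib/minikube/binaries/v1.28.3:$PATH"
	    # $phase is left unquoted so "certs all" expands into two arguments.
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="$KUBE_PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done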
	I1114 15:54:37.956992  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.957705  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.179165  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.179746  876065 main.go:141] libmachine: (no-preload-490998) Found IP for machine: 192.168.50.251
	I1114 15:54:39.179773  876065 main.go:141] libmachine: (no-preload-490998) Reserving static IP address...
	I1114 15:54:39.179792  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has current primary IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.180259  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.180295  876065 main.go:141] libmachine: (no-preload-490998) Reserved static IP address: 192.168.50.251
	I1114 15:54:39.180328  876065 main.go:141] libmachine: (no-preload-490998) DBG | skip adding static IP to network mk-no-preload-490998 - found existing host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"}
	I1114 15:54:39.180349  876065 main.go:141] libmachine: (no-preload-490998) DBG | Getting to WaitForSSH function...
	I1114 15:54:39.180368  876065 main.go:141] libmachine: (no-preload-490998) Waiting for SSH to be available...
	I1114 15:54:39.182637  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183005  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.183037  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183157  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH client type: external
	I1114 15:54:39.183185  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa (-rw-------)
	I1114 15:54:39.183218  876065 main.go:141] libmachine: (no-preload-490998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:39.183239  876065 main.go:141] libmachine: (no-preload-490998) DBG | About to run SSH command:
	I1114 15:54:39.183251  876065 main.go:141] libmachine: (no-preload-490998) DBG | exit 0
	I1114 15:54:39.276793  876065 main.go:141] libmachine: (no-preload-490998) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:39.277095  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetConfigRaw
	I1114 15:54:39.277799  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.281002  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281360  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.281393  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281696  876065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/config.json ...
	I1114 15:54:39.281970  876065 machine.go:88] provisioning docker machine ...
	I1114 15:54:39.281997  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:39.282236  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282395  876065 buildroot.go:166] provisioning hostname "no-preload-490998"
	I1114 15:54:39.282416  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.285099  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285498  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.285527  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285695  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.285865  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286026  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286277  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.286523  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.286978  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.287007  876065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-490998 && echo "no-preload-490998" | sudo tee /etc/hostname
	I1114 15:54:39.419452  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-490998
	
	I1114 15:54:39.419493  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.422544  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.422912  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.422951  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.423134  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.423360  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423591  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423756  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.423915  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.424324  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.424363  876065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-490998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-490998/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-490998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:39.552044  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:39.552085  876065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:39.552106  876065 buildroot.go:174] setting up certificates
	I1114 15:54:39.552118  876065 provision.go:83] configureAuth start
	I1114 15:54:39.552127  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.552438  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.555275  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555660  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.555771  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555936  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.558628  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559004  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.559042  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559181  876065 provision.go:138] copyHostCerts
	I1114 15:54:39.559247  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:39.559273  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:39.559337  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:39.559498  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:39.559512  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:39.559547  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:39.559612  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:39.559620  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:39.559644  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:39.559697  876065 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.no-preload-490998 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube no-preload-490998]
	I1114 15:54:39.728218  876065 provision.go:172] copyRemoteCerts
	I1114 15:54:39.728286  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:39.728314  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.731482  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.731920  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.731966  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.732138  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.732376  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.732605  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.732802  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:39.819537  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:39.848716  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 15:54:39.876339  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
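	The three scp steps above push ca.pem, server.pem and server-key.pem into /etc/docker on the guest; the server cert was generated with the SAN list printed at provision.go:112. To confirm those SANs made it into the deployed certificate (a sketch using standard openssl, not taken from the log):

	    # On the guest: list the IP/DNS SANs embedded in the provisioned server cert.
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'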
	I1114 15:54:39.917428  876065 provision.go:86] duration metric: configureAuth took 365.293803ms
	I1114 15:54:39.917461  876065 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:39.917686  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:39.917783  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.920823  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921417  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.921457  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921785  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.921989  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922170  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922351  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.922516  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.922992  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.923017  876065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:40.270821  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:40.270851  876065 machine.go:91] provisioned docker machine in 988.864728ms
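	The %!s(MISSING) token in the provisioning command a few lines above is a logging artifact: the command string carries a literal percent verb, so the logger's formatter reports a missing argument. Reconstructed from the logged text (an approximation, not a new command), the step that actually ran is:

	    # Write CRI-O's minikube options and restart the runtime (%!s(MISSING) was originally %s).
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio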
	I1114 15:54:40.270865  876065 start.go:300] post-start starting for "no-preload-490998" (driver="kvm2")
	I1114 15:54:40.270878  876065 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:40.270910  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.271296  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:40.271331  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.274197  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274517  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.274547  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274784  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.275045  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.275209  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.275379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.363810  876065 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:40.368485  876065 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:40.368515  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:40.368599  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:40.368688  876065 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:40.368820  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:40.378691  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:40.401789  876065 start.go:303] post-start completed in 130.90895ms
	I1114 15:54:40.401816  876065 fix.go:56] fixHost completed within 19.734039545s
	I1114 15:54:40.401848  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.404413  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404791  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.404824  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404962  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.405212  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405442  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405614  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.405840  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:40.406318  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:40.406338  876065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:54:40.521875  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977280.490539427
	
	I1114 15:54:40.521907  876065 fix.go:206] guest clock: 1699977280.490539427
	I1114 15:54:40.521917  876065 fix.go:219] Guest: 2023-11-14 15:54:40.490539427 +0000 UTC Remote: 2023-11-14 15:54:40.401821935 +0000 UTC m=+361.372113130 (delta=88.717492ms)
	I1114 15:54:40.521945  876065 fix.go:190] guest clock delta is within tolerance: 88.717492ms
	I1114 15:54:40.521952  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 19.854220019s
	I1114 15:54:40.521990  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.522294  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:40.525204  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525567  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.525611  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525786  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526412  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526589  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526682  876065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:40.526727  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.526847  876065 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:40.526881  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.529470  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529673  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529863  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.529895  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530047  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530189  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.530224  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530226  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530415  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530480  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530594  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530677  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.530726  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530881  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.634647  876065 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:40.641680  876065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:40.784919  876065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:40.791364  876065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:40.791466  876065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:40.814464  876065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:40.814496  876065 start.go:472] detecting cgroup driver to use...
	I1114 15:54:40.814608  876065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:40.834599  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:40.851666  876065 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:40.851761  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:40.870359  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:40.885345  876065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:41.042220  876065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:41.174015  876065 docker.go:219] disabling docker service ...
	I1114 15:54:41.174101  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:41.188849  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:41.201322  876065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:41.329124  876065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:41.456116  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:41.477162  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:41.497860  876065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:41.497932  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.509750  876065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:41.509843  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.521944  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.532916  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.545469  876065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:41.556976  876065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:41.567322  876065 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:41.567401  876065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:41.583043  876065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:41.593941  876065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:41.717384  876065 ssh_runner.go:195] Run: sudo systemctl restart crio
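	The sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager and pin conmon_cgroup to "pod", all in /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. A quick way to confirm the drop-in ended up in the expected state (a sketch of the expected result, not copied from the host):

	    # Check the CRI-O drop-in after the restart.
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # Expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"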
	I1114 15:54:41.907278  876065 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:41.907351  876065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:41.912763  876065 start.go:540] Will wait 60s for crictl version
	I1114 15:54:41.912843  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:41.917105  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:41.965326  876065 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:41.965418  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.016065  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.079721  876065 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:40.028538  876396 retry.go:31] will retry after 2.943912692s: kubelet not initialised
	I1114 15:54:42.081301  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:42.084358  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.084771  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:42.084805  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.085014  876065 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:42.089551  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:42.102676  876065 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:42.102730  876065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:42.145434  876065 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:42.145479  876065 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:42.145570  876065 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.145592  876065 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.145621  876065 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.145620  876065 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.145662  876065 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1114 15:54:42.145692  876065 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.145819  876065 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.145564  876065 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.147966  876065 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.147967  876065 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.148056  876065 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1114 15:54:42.147970  876065 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.148093  876065 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.147960  876065 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.318368  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1114 15:54:42.318578  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.325647  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.340363  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.375378  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.473131  876065 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1114 15:54:42.473195  876065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.473202  876065 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1114 15:54:42.473235  876065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.473253  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.473283  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.511600  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.554432  876065 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1114 15:54:42.554502  876065 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1114 15:54:42.554572  876065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.554599  876065 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1114 15:54:42.554618  876065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.554632  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554657  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554532  876065 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.554724  876065 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1114 15:54:42.554750  876065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.554776  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554778  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554907  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.554969  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.576922  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.577004  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.577114  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.577535  876065 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1114 15:54:42.577591  876065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.577631  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.655186  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.655318  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1114 15:54:42.655449  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1114 15:54:42.655473  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:42.655536  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.706186  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1114 15:54:42.706257  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.706283  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1114 15:54:42.706304  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:42.706372  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:42.706408  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1114 15:54:42.706548  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:42.737003  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1114 15:54:42.737032  876065 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737093  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737102  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1114 15:54:42.737179  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1114 15:54:42.737237  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:42.769211  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1114 15:54:42.769251  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1114 15:54:42.769304  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1114 15:54:42.769289  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 15:54:42.769428  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:54:44.006164  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.268897316s)
	I1114 15:54:44.006206  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1114 15:54:44.006240  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.236783751s)
	I1114 15:54:44.006275  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1114 15:54:44.006283  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.269163879s)
	I1114 15:54:44.006297  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1114 15:54:44.006322  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:44.006375  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:40.016931  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:40.017030  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.030798  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.541996  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.042023  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.542537  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.042880  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.542514  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.577021  876668 api_server.go:72] duration metric: took 2.560093027s to wait for apiserver process to appear ...
	I1114 15:54:42.577059  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:42.577088  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.577767  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:42.577805  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.578225  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:43.078953  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.457425  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:44.460290  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:42.978588  876396 retry.go:31] will retry after 5.776997827s: kubelet not initialised
	I1114 15:54:46.326192  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.326231  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.326249  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.390609  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.390668  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.579140  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.590569  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:46.590606  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.079186  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.084460  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.084483  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.578774  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.588878  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.588919  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:48.079047  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:48.084809  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:54:48.098877  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:48.098941  876668 api_server.go:131] duration metric: took 5.521873886s to wait for apiserver health ...
	I1114 15:54:48.098955  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:48.098972  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:48.101010  876668 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
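The api_server.go lines above show the standard minikube health wait: GET https://<node-ip>:8444/healthz is retried until it returns 200, and the intermediate 403 (anonymous user, RBAC not yet bootstrapped) and 500 (rbac/bootstrap-roles and scheduling post-start hooks still pending) responses are expected while the apiserver starts up. A minimal Go sketch of that polling pattern, with the endpoint copied from the log; skipping certificate verification here is an assumption for a local probe, not necessarily what minikube does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline expires. TLS verification is skipped because the probe
	// talks to a self-signed local control plane (assumption for this sketch).
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above.
		if err := waitForHealthz("https://192.168.61.196:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}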
	I1114 15:54:47.219243  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.212835904s)
	I1114 15:54:47.219281  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1114 15:54:47.219308  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:47.219472  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:48.102440  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:48.154163  876668 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
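Configuring the bridge CNI amounts to writing a small conflist into /etc/cni/net.d (the 457-byte 1-k8s.conflist scp'd above). The exact file minikube generated is not shown in this log, so the following is only an illustrative bridge+portmap config of that shape written from Go; the bridge name and the 10.244.0.0/16 subnet are assumptions based on the pod CIDR that appears elsewhere in the log:

	package main

	import (
		"fmt"
		"os"
	)

	// Illustrative bridge CNI config; field values are assumptions, not the
	// exact contents of the file minikube wrote above.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		path := "/etc/cni/net.d/1-k8s.conflist"
		if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("wrote %d bytes to %s\n", len(bridgeConflist), path)
	}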
	I1114 15:54:48.212336  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:48.229819  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:48.229862  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:48.229874  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:48.229886  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:48.229896  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:48.229905  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:48.229913  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:48.229923  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:48.229934  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:48.229944  876668 system_pods.go:74] duration metric: took 17.577706ms to wait for pod list to return data ...
	I1114 15:54:48.229961  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:48.236002  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:48.236043  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:48.236057  876668 node_conditions.go:105] duration metric: took 6.089691ms to run NodePressure ...
	I1114 15:54:48.236093  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:48.608191  876668 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622192  876668 kubeadm.go:787] kubelet initialised
	I1114 15:54:48.622221  876668 kubeadm.go:788] duration metric: took 13.999979ms waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622232  876668 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:48.629670  876668 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.636566  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636594  876668 pod_ready.go:81] duration metric: took 6.892422ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.636611  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636619  876668 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.643982  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644013  876668 pod_ready.go:81] duration metric: took 7.383826ms waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.644030  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644037  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.649791  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649815  876668 pod_ready.go:81] duration metric: took 5.769971ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.649825  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649833  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.655071  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655100  876668 pod_ready.go:81] duration metric: took 5.259243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.655113  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655121  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.018817  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018849  876668 pod_ready.go:81] duration metric: took 363.719341ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.018863  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018872  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.417556  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417588  876668 pod_ready.go:81] duration metric: took 398.704259ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.417600  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417607  876668 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.816654  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816692  876668 pod_ready.go:81] duration metric: took 399.075859ms waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.816712  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816721  876668 pod_ready.go:38] duration metric: took 1.194471296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:49.816765  876668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:54:49.830335  876668 ops.go:34] apiserver oom_adj: -16
	I1114 15:54:49.830363  876668 kubeadm.go:640] restartCluster took 21.356528166s
	I1114 15:54:49.830372  876668 kubeadm.go:406] StartCluster complete in 21.422955285s
	I1114 15:54:49.830390  876668 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.830502  876668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:54:49.832470  876668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.859435  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:54:49.859707  876668 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:54:49.859810  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:49.859852  876668 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859873  876668 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859885  876668 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-529430"
	I1114 15:54:49.859892  876668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-529430"
	W1114 15:54:49.859895  876668 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:54:49.859954  876668 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859973  876668 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.859981  876668 addons.go:240] addon metrics-server should already be in state true
	I1114 15:54:49.860025  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.859956  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.860306  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860345  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860438  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860452  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860489  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860491  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.866006  876668 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-529430" context rescaled to 1 replicas
	I1114 15:54:49.866053  876668 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:54:49.878650  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I1114 15:54:49.878976  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I1114 15:54:49.879627  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I1114 15:54:49.891649  876668 out.go:177] * Verifying Kubernetes components...
	I1114 15:54:49.893450  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:54:49.892232  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892275  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892329  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.894259  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894282  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894473  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894486  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894610  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894623  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894687  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894892  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.894952  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894993  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.895598  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.895642  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.896296  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.896321  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.899095  876668 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.899120  876668 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:54:49.899151  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.899576  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.899622  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.917834  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1114 15:54:49.917842  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I1114 15:54:49.918442  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.918505  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.919007  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919026  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919167  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919187  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919493  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919562  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919803  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.920191  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.920237  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.922764  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1114 15:54:49.922969  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.924925  876668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:49.923380  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.926603  876668 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:49.926625  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:54:49.926647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.927991  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.928012  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.928459  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.928683  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.930696  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.930740  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.931131  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.931154  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.931330  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.931491  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.931647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.931775  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.934128  876668 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:54:49.936007  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:54:49.936031  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:54:49.936056  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.939725  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.939782  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I1114 15:54:49.940336  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.940442  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.940467  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.940822  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.941060  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.941093  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.941095  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.941211  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.941388  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.941856  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.942057  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.943639  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.943972  876668 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:49.943991  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:54:49.944009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.947172  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947631  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.947663  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947902  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.948102  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.948278  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.948579  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:46.955010  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:48.955172  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:50.066801  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:50.084526  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:54:50.084555  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:54:50.145315  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:50.145671  876668 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:50.146084  876668 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 15:54:50.151627  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:54:50.151646  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:54:50.216318  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:50.216349  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:54:50.316434  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:51.787528  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642164298s)
	I1114 15:54:51.787644  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787672  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.787695  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.720847981s)
	I1114 15:54:51.787744  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788039  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788064  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788075  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788086  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788094  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788109  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788119  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790294  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790322  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.790327  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790349  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.803844  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.803875  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.804205  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.804238  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.804239  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.925929  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.609443677s)
	I1114 15:54:51.926001  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926019  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926385  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926429  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.926456  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926468  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926483  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926795  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926814  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926826  876668 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-529430"
	I1114 15:54:51.926829  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:52.146969  876668 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1114 15:54:48.761692  876396 retry.go:31] will retry after 7.067385779s: kubelet not initialised
	I1114 15:54:50.000157  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.780649338s)
	I1114 15:54:50.000194  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1114 15:54:50.000227  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:50.000281  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:52.291215  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (2.290903759s)
	I1114 15:54:52.291244  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1114 15:54:52.291271  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:52.291312  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:53.739008  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.447671823s)
	I1114 15:54:53.739041  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1114 15:54:53.739066  876065 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:53.739126  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:52.194351  876668 addons.go:502] enable addons completed in 2.33463136s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1114 15:54:52.220203  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:54.220773  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:50.957159  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:53.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.458026  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.834422  876396 retry.go:31] will retry after 18.847542128s: kubelet not initialised
	I1114 15:54:56.221753  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:56.720961  876668 node_ready.go:49] node "default-k8s-diff-port-529430" has status "Ready":"True"
	I1114 15:54:56.720989  876668 node_ready.go:38] duration metric: took 6.575288694s waiting for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:56.721001  876668 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:56.730382  876668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736722  876668 pod_ready.go:92] pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:56.736761  876668 pod_ready.go:81] duration metric: took 6.345209ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736774  876668 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:58.773825  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
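node_ready.go and pod_ready.go above poll the node and system-pod conditions through the Kubernetes API until the status reaches "Ready":"True". A rough client-go equivalent of the node check, with the kubeconfig path and node name copied from the log; the 2-second polling interval is an assumption:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the named node has a Ready=True condition.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17598-824991/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			ready, err := nodeIsReady(ctx, cs, "default-k8s-diff-port-529430")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}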
	I1114 15:54:57.458580  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:59.956188  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:01.061681  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.322513643s)
	I1114 15:55:01.061716  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1114 15:55:01.061753  876065 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.061812  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.811277  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1114 15:55:01.811342  876065 cache_images.go:123] Successfully loaded all cached images
	I1114 15:55:01.811352  876065 cache_images.go:92] LoadImages completed in 19.665858366s
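The LoadImages sequence above stats each cached tarball under /var/lib/minikube/images, skips the copy when the file already exists, and then loads images one at a time with sudo podman load -i. A simplified local sketch of that skip-then-load loop, meant to run inside the VM where the tarballs live; the image names are copied from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	// loadCachedImages loads every image tarball in dir into the CRI-O image
	// store via podman, skipping paths that do not exist (mirroring the
	// "copy: skipping ... (exists)" / "Loading image:" steps in the log).
	func loadCachedImages(dir string, names []string) error {
		for _, name := range names {
			tarball := filepath.Join(dir, name)
			if _, err := os.Stat(tarball); err != nil {
				fmt.Printf("skipping %s: %v\n", tarball, err)
				continue
			}
			out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
			if err != nil {
				return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
			}
			fmt.Printf("loaded %s\n", name)
		}
		return nil
	}

	func main() {
		names := []string{
			"coredns_v1.10.1", "etcd_3.5.9-0", "kube-apiserver_v1.28.3",
			"kube-controller-manager_v1.28.3", "kube-proxy_v1.28.3",
			"kube-scheduler_v1.28.3", "storage-provisioner_v5",
		}
		if err := loadCachedImages("/var/lib/minikube/images", names); err != nil {
			fmt.Println(err)
		}
	}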
	I1114 15:55:01.811461  876065 ssh_runner.go:195] Run: crio config
	I1114 15:55:01.881576  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:01.881603  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:01.881622  876065 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:55:01.881646  876065 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-490998 NodeName:no-preload-490998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:55:01.881781  876065 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-490998"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
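
	The generated config above fixes the pod network at 10.244.0.0/16, the service network at 10.96.0.0/12 and the DNS domain at cluster.local before being copied to the guest as kubeadm.yaml.new. As a rough illustration of that substitution step, here is a minimal Go sketch that renders just the networking stanza with text/template; the struct and template names are invented for the example and are not minikube's actual code.

	package main

	import (
		"os"
		"text/template"
	)

	// networking holds the subnet values shown in the generated kubeadm config above.
	type networking struct {
		DNSDomain     string
		PodSubnet     string
		ServiceSubnet string
	}

	const networkingTmpl = `networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("networking").Parse(networkingTmpl))
		// Values copied from the log; real code would derive them from the cluster config.
		n := networking{DNSDomain: "cluster.local", PodSubnet: "10.244.0.0/16", ServiceSubnet: "10.96.0.0/12"}
		if err := t.Execute(os.Stdout, n); err != nil {
			panic(err)
		}
	}
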
	
	I1114 15:55:01.881859  876065 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-490998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
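
	The 10-kubeadm.conf drop-in shown above overrides ExecStart so the kubelet runs from the versioned binary with a handful of node-specific flags. A small Go sketch of assembling that flag line (flag values copied from the log; the systemd scaffolding and the SSH transfer are omitted):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Flags as they appear in the generated drop-in above.
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
			"--hostname-override=no-preload-490998",
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=192.168.50.251",
		}
		execStart := "ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet " + strings.Join(flags, " ")
		fmt.Println(execStart)
	}
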
	I1114 15:55:01.881918  876065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:55:01.892613  876065 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:55:01.892696  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:55:01.902267  876065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1114 15:55:01.919728  876065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:55:01.936188  876065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1114 15:55:01.954510  876065 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1114 15:55:01.958337  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:55:01.970290  876065 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998 for IP: 192.168.50.251
	I1114 15:55:01.970328  876065 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:55:01.970513  876065 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:55:01.970563  876065 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:55:01.970662  876065 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/client.key
	I1114 15:55:01.970794  876065 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key.6b358a63
	I1114 15:55:01.970857  876065 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key
	I1114 15:55:01.971003  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:55:01.971065  876065 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:55:01.971079  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:55:01.971123  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:55:01.971160  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:55:01.971192  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:55:01.971252  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:55:01.972129  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:55:01.996012  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:55:02.020778  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:55:02.044395  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:55:02.066866  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:55:02.089331  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:55:02.113148  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:55:02.136083  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:55:02.157833  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:55:02.181150  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:55:02.203155  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:55:02.225839  876065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:55:02.243335  876065 ssh_runner.go:195] Run: openssl version
	I1114 15:55:02.249465  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:55:02.259874  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264340  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264401  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.270441  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:55:02.282031  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:55:02.293297  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298093  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298155  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.303668  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:55:02.315423  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:55:02.325976  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332124  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332194  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.339377  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
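
	Each CA copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-name hash (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how the system trust store picks it up. A sketch of that hash-and-symlink step in Go, run locally for illustration; minikube performs the equivalent shell commands over SSH on the guest:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash computes `openssl x509 -hash` for certPath and creates
	// a <hash>.0 symlink to it inside linkDir, mirroring the log above.
	func linkCertByHash(certPath, linkDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(linkDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
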
	I1114 15:55:02.350318  876065 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:55:02.354796  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:55:02.360867  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:55:02.366306  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:55:02.372186  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:55:02.377900  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:55:02.383519  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
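
	`openssl x509 -checkend 86400` asks whether a certificate will still be valid in 24 hours; a non-zero exit would force regeneration. The same check can be expressed in Go by parsing the PEM and comparing NotAfter, as in this sketch (the path is one of those checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d, the equivalent of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
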
	I1114 15:55:02.389128  876065 kubeadm.go:404] StartCluster: {Name:no-preload-490998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:55:02.389229  876065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:55:02.389304  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:02.428473  876065 cri.go:89] found id: ""
	I1114 15:55:02.428578  876065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:55:02.439944  876065 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:55:02.439969  876065 kubeadm.go:636] restartCluster start
	I1114 15:55:02.440079  876065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:55:02.450025  876065 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.451533  876065 kubeconfig.go:92] found "no-preload-490998" server: "https://192.168.50.251:8443"
	I1114 15:55:02.454290  876065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:55:02.463352  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.463410  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.474007  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.474025  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.474065  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.484826  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.985519  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.985595  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.998224  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.485905  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.486059  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.499281  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.985805  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.985925  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.998086  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:00.819591  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:02.773550  876668 pod_ready.go:92] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.773573  876668 pod_ready.go:81] duration metric: took 6.036790568s waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.773582  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778746  876668 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.778764  876668 pod_ready.go:81] duration metric: took 5.176465ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778772  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784332  876668 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.784353  876668 pod_ready.go:81] duration metric: took 5.572815ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784366  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789492  876668 pod_ready.go:92] pod "kube-proxy-zpchs" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.789514  876668 pod_ready.go:81] duration metric: took 5.139759ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789524  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796606  876668 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.796628  876668 pod_ready.go:81] duration metric: took 7.097079ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796639  876668 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.454894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.956449  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.485284  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.485387  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.498240  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:04.985846  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.985936  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.998901  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.485250  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.485365  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.497261  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.985411  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.985511  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.997656  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.485227  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.485332  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.497310  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.985893  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.985977  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.997585  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.485903  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.486001  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.498532  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.985881  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.985958  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.997898  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.485400  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.485512  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.497446  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.985912  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.986015  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.998121  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.081742  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:07.082515  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.580987  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:06.957307  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.455227  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.485641  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.485735  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.498347  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:09.985970  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.986073  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.997958  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.485503  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.485600  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.497407  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.985577  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.985655  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.998624  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.485146  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.485250  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.497837  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.985423  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.985551  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.997959  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:12.464381  876065 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
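
	The repeated `pgrep -xnf kube-apiserver.*minikube.*` calls above are a bounded poll: ask for the apiserver pid every half second or so until it appears or the deadline passes, after which the cluster is marked as needing reconfiguration. A minimal Go sketch of such a loop (the timeout and interval here are illustrative, not minikube's actual values):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep for pattern until it returns a pid or ctx expires.
	func waitForProcess(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output(); err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
			fmt.Println(err)
		}
	}
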
	I1114 15:55:12.464449  876065 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:55:12.464478  876065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:55:12.464582  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:12.505435  876065 cri.go:89] found id: ""
	I1114 15:55:12.505532  876065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:55:12.522470  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:55:12.532890  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:55:12.532982  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542115  876065 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542141  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:12.684875  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:13.897464  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.21254145s)
	I1114 15:55:13.897509  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:11.582332  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.085102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:11.955438  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.455506  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.687822  876396 kubeadm.go:787] kubelet initialised
	I1114 15:55:14.687849  876396 kubeadm.go:788] duration metric: took 43.622781532s waiting for restarted kubelet to initialise ...
	I1114 15:55:14.687857  876396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:14.693560  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698796  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.698819  876396 pod_ready.go:81] duration metric: took 5.232669ms waiting for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698828  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703879  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.703903  876396 pod_ready.go:81] duration metric: took 5.067006ms waiting for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703916  876396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708064  876396 pod_ready.go:92] pod "etcd-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.708093  876396 pod_ready.go:81] duration metric: took 4.168333ms waiting for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708106  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713030  876396 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.713055  876396 pod_ready.go:81] duration metric: took 4.939899ms waiting for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713067  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087824  876396 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.087857  876396 pod_ready.go:81] duration metric: took 374.780312ms waiting for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087873  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.486984  876396 pod_ready.go:92] pod "kube-proxy-kw2ns" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.487011  876396 pod_ready.go:81] duration metric: took 399.130772ms waiting for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.487020  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886624  876396 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.886658  876396 pod_ready.go:81] duration metric: took 399.628757ms waiting for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886671  876396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.096314  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.174495  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.254647  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:55:14.254765  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.273596  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.788350  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.288506  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.788580  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.288476  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.787853  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.816380  876065 api_server.go:72] duration metric: took 2.561735945s to wait for apiserver process to appear ...
	I1114 15:55:16.816408  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:55:16.816428  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:16.582309  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:18.584599  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:16.957605  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:19.457613  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.541438  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.541473  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:20.541490  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:20.582790  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.582838  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:21.083891  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.089625  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.089658  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:21.583184  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.599539  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.599576  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:22.083098  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:22.088480  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 15:55:22.096517  876065 api_server.go:141] control plane version: v1.28.3
	I1114 15:55:22.096545  876065 api_server.go:131] duration metric: took 5.280130119s to wait for apiserver health ...
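
	The /healthz probe goes through three stages visible above: 403 while anonymous access to the path is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending, and finally 200 with body "ok". A Go sketch of such a probe follows; it skips TLS verification purely for brevity, whereas a real client would trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: real code would verify against the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get("https://192.168.50.251:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
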
	I1114 15:55:22.096558  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:22.096568  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:22.098612  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:55:18.194723  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.195126  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.196472  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.100184  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:55:22.125049  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
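
	The 457-byte file placed at /etc/cni/net.d/1-k8s.conflist configures the bridge CNI plugin for the 10.244.0.0/16 pod network. Its exact contents are not shown in the log, so the sketch below writes a generic bridge + host-local conflist with the same subnet, as an assumption of what such a file contains:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Generic bridge CNI config; every field value except the subnet is an assumption.
		conflist := map[string]interface{}{
			"cniVersion": "0.4.0",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":      "bridge",
					"bridge":    "bridge",
					"isGateway": true,
					"ipMasq":    true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
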
	I1114 15:55:22.150893  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:55:22.163922  876065 system_pods.go:59] 8 kube-system pods found
	I1114 15:55:22.163958  876065 system_pods.go:61] "coredns-5dd5756b68-n77fz" [e2f5ce73-a65e-40da-b554-c929f093a1a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:55:22.163970  876065 system_pods.go:61] "etcd-no-preload-490998" [01e272b5-4463-431d-8ed1-f561a90b667d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:55:22.163983  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [529f79fd-eae5-44e9-971d-b3ecb5ed025d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:55:22.163989  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ea299234-2456-4171-bac0-8e8ff4998596] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:55:22.163994  876065 system_pods.go:61] "kube-proxy-6hqk5" [7233dd72-138c-4148-834b-2dcb83a4cf00] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:55:22.163999  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [666e8a03-50b1-4b08-84f3-c3c6ec8a5452] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:55:22.164005  876065 system_pods.go:61] "metrics-server-57f55c9bc5-6lg6h" [7afa1e38-c64c-4d03-9b00-5765e7e251ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:55:22.164036  876065 system_pods.go:61] "storage-provisioner" [1090ed8a-6424-4980-9ea7-b43e998d1eb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:55:22.164050  876065 system_pods.go:74] duration metric: took 13.132475ms to wait for pod list to return data ...
	I1114 15:55:22.164058  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:55:22.167930  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:55:22.168020  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 15:55:22.168033  876065 node_conditions.go:105] duration metric: took 3.969303ms to run NodePressure ...
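
	Verifying the NodePressure condition amounts to listing nodes and confirming that the pressure-type conditions are False while capacity (2 CPUs and 17784752Ki of ephemeral storage here) is reported. A client-go sketch of that check, assuming a kubeconfig at the default location rather than minikube's internal client:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			for _, cond := range node.Status.Conditions {
				switch cond.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("%s %s=%s\n", node.Name, cond.Type, cond.Status)
				}
			}
			cpu := node.Status.Capacity[corev1.ResourceCPU]
			storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
		}
	}
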
	I1114 15:55:22.168055  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:22.456975  876065 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470174  876065 kubeadm.go:787] kubelet initialised
	I1114 15:55:22.470202  876065 kubeadm.go:788] duration metric: took 13.201285ms waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470216  876065 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:22.483150  876065 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:21.081628  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:23.083015  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:21.955808  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.455829  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.696004  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.195514  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.514847  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.519442  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.013526  876065 pod_ready.go:92] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:27.013584  876065 pod_ready.go:81] duration metric: took 4.530407487s waiting for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:27.013600  876065 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:29.032979  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:25.582366  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.080716  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.456123  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.955087  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:29.694646  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.194401  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:31.033810  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:33.033026  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.033058  876065 pod_ready.go:81] duration metric: took 6.019448696s waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.033071  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039148  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.039180  876065 pod_ready.go:81] duration metric: took 6.099138ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039194  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049651  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.049675  876065 pod_ready.go:81] duration metric: took 10.473938ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049685  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061928  876065 pod_ready.go:92] pod "kube-proxy-6hqk5" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.061971  876065 pod_ready.go:81] duration metric: took 12.277038ms waiting for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061984  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071422  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.071452  876065 pod_ready.go:81] duration metric: took 9.456301ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071465  876065 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
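
	Each pod_ready wait interleaved above repeatedly fetches the pod and checks its Ready condition until it flips to True or the timeout expires, which is why the metrics-server pods keep logging "Ready":"False". A client-go sketch of one such wait (pod name and namespace copied from the log; timeout and polling interval are illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls until the pod's Ready condition is True or ctx expires.
	func waitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %s/%s not ready: %w", ns, name, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitForPodReady(ctx, clientset, "kube-system", "metrics-server-57f55c9bc5-6lg6h"); err != nil {
			fmt.Println(err)
		}
	}
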
	I1114 15:55:30.081625  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.082675  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.581547  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:30.955154  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.957772  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.454775  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.194959  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:36.195495  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.339391  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.340404  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.083295  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.584210  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.956659  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:38.696669  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.194485  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.838699  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.840605  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.081223  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.081468  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.454630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.455871  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:43.195172  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:45.195687  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.339878  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.838910  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.841677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.082382  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.582248  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.457525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.955133  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:47.695467  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.195263  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.339284  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.340315  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.082546  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.581238  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.955630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.454502  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.455395  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:52.694030  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:54.694593  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:56.695136  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.838685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.838864  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.581986  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.582037  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.582635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.955377  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.963166  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:01.195573  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.840578  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.338828  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.082323  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.582531  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.454214  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.454975  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:03.198457  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:05.694675  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.339632  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.340001  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.840358  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:07.082081  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:09.582483  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.455257  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.455373  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.457344  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.196641  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.693989  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.339845  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:13.839805  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.583615  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:14.083682  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.957092  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.456347  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.694792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.200049  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.339768  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:18.839853  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.583278  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:19.081994  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.954665  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.454724  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.697859  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.194201  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.194738  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.840457  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.339880  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:21.082759  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.581646  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.457299  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.954029  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.694448  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.696563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:25.342126  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:27.839304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.083724  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:28.582086  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.955572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.459642  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.194785  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.693765  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:30.339130  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:32.339361  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.083363  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.582213  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.955312  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.955576  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.694783  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:34.339538  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.839469  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.842444  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.081206  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.581263  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.457091  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.956262  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.195134  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:40.195875  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.343304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.839634  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.080021  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.081543  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.453768  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.455182  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.457368  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:42.694667  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.195018  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.197081  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:46.338815  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:48.339683  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.083139  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.582320  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.954718  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.455135  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:49.696028  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.194484  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.340708  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.845026  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.082635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.583485  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.455840  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.955079  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.194627  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.197158  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.338956  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.339983  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.081903  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.583102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.955380  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.956134  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.695165  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.196563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:59.340299  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.838688  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.839025  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:00.080983  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:02.582197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:04.583222  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.454473  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.455187  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.455628  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.694518  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.695324  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.839239  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.341567  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.081736  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.581889  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.954781  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.954913  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.194118  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.194688  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:12.195198  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.840317  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.582436  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.583580  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.955894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.459525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.195588  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.195922  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:15.339470  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:17.340059  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.081770  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.082006  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.954957  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.455211  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.695530  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.193801  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.839618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.839819  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:20.083348  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:22.581010  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.582114  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.958579  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.454848  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:23.196520  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:25.196779  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.339942  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.340928  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.841122  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.583453  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:29.082667  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.455784  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.954086  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:27.695279  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.194416  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.341608  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.343898  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.581417  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.583852  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.955148  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.455525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:32.693640  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:34.695191  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.194999  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.838294  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.838948  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:36.082181  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.582488  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.955108  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.454392  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:40.455291  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.195193  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.694849  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.839180  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.339359  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.081697  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:43.081876  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.455905  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.962584  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.194494  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:46.195239  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.840607  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.338846  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:45.582002  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.083197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.454539  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.455025  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.694661  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.695232  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.840392  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.580410  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.580961  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.581502  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:51.954903  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.454053  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:53.194450  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:55.196537  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.339997  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.839677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.080798  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.087078  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.454639  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:58.955200  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.696210  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:00.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:02.194961  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.339152  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.340037  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.838551  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.582808  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.084331  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.458365  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.955679  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.696770  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:07.195364  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:05.840151  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.340709  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.582153  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.083260  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.454599  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.458281  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.196674  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.696022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.839588  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.342479  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.583479  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:14.081451  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.954623  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.455233  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.147383  876220 pod_ready.go:81] duration metric: took 4m0.000589332s waiting for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	E1114 15:58:15.147416  876220 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:58:15.147446  876220 pod_ready.go:38] duration metric: took 4m11.626263996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:15.147477  876220 kubeadm.go:640] restartCluster took 4m32.524775831s
	W1114 15:58:15.147587  876220 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:58:15.147630  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:58:14.195824  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.696055  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.841115  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.341347  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.084839  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.582575  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.696792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:20.838749  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:22.840049  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.080598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.081173  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.694974  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:26.196317  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.340015  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.839312  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.081700  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.582564  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.582728  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.037182  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.889530708s)
	I1114 15:58:29.037253  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:29.052797  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:58:29.061624  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:58:29.070799  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:58:29.070848  876220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:58:29.303905  876220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:58:28.695122  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.696046  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.341383  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:32.341988  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:31.584191  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.082795  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:33.195568  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:35.695145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.839094  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.840873  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.086791  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:38.581233  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.234828  876220 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:58:40.234881  876220 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:58:40.234965  876220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:58:40.235127  876220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:58:40.235264  876220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:58:40.235361  876220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:58:40.237159  876220 out.go:204]   - Generating certificates and keys ...
	I1114 15:58:40.237276  876220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:58:40.237366  876220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:58:40.237511  876220 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:58:40.237608  876220 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:58:40.237697  876220 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:58:40.237791  876220 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:58:40.237883  876220 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:58:40.237975  876220 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:58:40.238066  876220 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:58:40.238161  876220 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:58:40.238213  876220 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:58:40.238283  876220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:58:40.238352  876220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:58:40.238422  876220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:58:40.238506  876220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:58:40.238582  876220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:58:40.238725  876220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:58:40.238816  876220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:58:40.240266  876220 out.go:204]   - Booting up control plane ...
	I1114 15:58:40.240404  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:58:40.240508  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:58:40.240593  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:58:40.240822  876220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:58:40.240958  876220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:58:40.241018  876220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:58:40.241226  876220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:58:40.241333  876220 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.509675 seconds
	I1114 15:58:40.241470  876220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:58:40.241658  876220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:58:40.241744  876220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:58:40.241979  876220 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-279880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:58:40.242054  876220 kubeadm.go:322] [bootstrap-token] Using token: 2hujph.0fcw82xd7gxidhsk
	I1114 15:58:40.243677  876220 out.go:204]   - Configuring RBAC rules ...
	I1114 15:58:40.243823  876220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:58:40.243942  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:58:40.244131  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:58:40.244252  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:58:40.244351  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:58:40.244464  876220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:58:40.244616  876220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:58:40.244673  876220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:58:40.244732  876220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:58:40.244762  876220 kubeadm.go:322] 
	I1114 15:58:40.244828  876220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:58:40.244835  876220 kubeadm.go:322] 
	I1114 15:58:40.244904  876220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:58:40.244913  876220 kubeadm.go:322] 
	I1114 15:58:40.244934  876220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:58:40.244982  876220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:58:40.245027  876220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:58:40.245033  876220 kubeadm.go:322] 
	I1114 15:58:40.245108  876220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:58:40.245128  876220 kubeadm.go:322] 
	I1114 15:58:40.245185  876220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:58:40.245195  876220 kubeadm.go:322] 
	I1114 15:58:40.245269  876220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:58:40.245376  876220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:58:40.245483  876220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:58:40.245493  876220 kubeadm.go:322] 
	I1114 15:58:40.245606  876220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:58:40.245700  876220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:58:40.245708  876220 kubeadm.go:322] 
	I1114 15:58:40.245828  876220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.245986  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:58:40.246023  876220 kubeadm.go:322] 	--control-plane 
	I1114 15:58:40.246036  876220 kubeadm.go:322] 
	I1114 15:58:40.246148  876220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:58:40.246158  876220 kubeadm.go:322] 
	I1114 15:58:40.246247  876220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.246364  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:58:40.246386  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:58:40.246394  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:58:40.248160  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:58:40.249669  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:58:40.299570  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:58:40.399662  876220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:58:40.399751  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=embed-certs-279880 minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.399759  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.456044  876220 ops.go:34] apiserver oom_adj: -16
	I1114 15:58:40.674206  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.780887  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:37.695540  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.195681  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:39.338902  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.339264  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.339844  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.582722  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.082401  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.391744  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:41.892060  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.392311  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.892385  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.391523  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.892286  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.392103  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.891494  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:45.392324  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.695415  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.195275  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.842259  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.339758  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.582481  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.079990  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.891330  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.391723  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.892283  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.391436  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.891664  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.392116  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.892052  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.391957  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.892316  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:50.391756  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.696088  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.195252  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.195695  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.891614  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.391818  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.891371  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.391565  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.544346  876220 kubeadm.go:1081] duration metric: took 12.144659895s to wait for elevateKubeSystemPrivileges.
	I1114 15:58:52.544391  876220 kubeadm.go:406] StartCluster complete in 5m9.978264522s
	I1114 15:58:52.544428  876220 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.544541  876220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:58:52.547345  876220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.547635  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:58:52.547785  876220 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:58:52.547873  876220 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-279880"
	I1114 15:58:52.547886  876220 addons.go:69] Setting default-storageclass=true in profile "embed-certs-279880"
	I1114 15:58:52.547903  876220 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-279880"
	I1114 15:58:52.547907  876220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-279880"
	W1114 15:58:52.547915  876220 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:58:52.547951  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:58:52.547986  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548010  876220 addons.go:69] Setting metrics-server=true in profile "embed-certs-279880"
	I1114 15:58:52.548027  876220 addons.go:231] Setting addon metrics-server=true in "embed-certs-279880"
	W1114 15:58:52.548038  876220 addons.go:240] addon metrics-server should already be in state true
	I1114 15:58:52.548083  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548508  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548612  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548844  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.568396  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I1114 15:58:52.568429  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I1114 15:58:52.568402  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I1114 15:58:52.569005  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569019  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569009  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569581  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569612  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.569772  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569796  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.570042  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570183  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.570699  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.570718  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.570742  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.570723  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.571364  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.571943  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.571975  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.575936  876220 addons.go:231] Setting addon default-storageclass=true in "embed-certs-279880"
	W1114 15:58:52.575961  876220 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:58:52.575996  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.576368  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.576412  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.588007  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I1114 15:58:52.588767  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.589487  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.589505  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.589943  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.590164  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.591841  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1114 15:58:52.592269  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.592610  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.594453  876220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:58:52.593100  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.594839  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1114 15:58:52.595836  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:58:52.595856  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:58:52.595874  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.595879  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.596356  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.596654  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.596683  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.597179  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.597199  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.597596  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.598225  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.598250  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.598972  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599389  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.599412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599655  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.599823  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.599971  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.600085  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.601301  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.603202  876220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:58:52.604691  876220 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.604701  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:58:52.604714  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.607585  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.607911  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.607942  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.608138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.608309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.608450  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.608586  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.614716  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1114 15:58:52.615047  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.615462  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.615503  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.615849  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.616006  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.617386  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.617630  876220 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.617647  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:58:52.617666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.620337  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620656  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.620700  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620951  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.621103  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.621252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.621374  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.636800  876220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-279880" context rescaled to 1 replicas
	I1114 15:58:52.636844  876220 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:58:52.638665  876220 out.go:177] * Verifying Kubernetes components...
	I1114 15:58:50.340524  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.341233  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.080611  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.081851  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.582577  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.640094  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:52.829938  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.840140  876220 node_ready.go:35] waiting up to 6m0s for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.840653  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:58:52.858164  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.877415  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:58:52.877448  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:58:52.900588  876220 node_ready.go:49] node "embed-certs-279880" has status "Ready":"True"
	I1114 15:58:52.900614  876220 node_ready.go:38] duration metric: took 60.432125ms waiting for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.900624  876220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:52.972955  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:53.009532  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:58:53.009564  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:58:53.064247  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:53.064283  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:58:53.168472  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:54.543952  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.713966912s)
	I1114 15:58:54.544016  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544029  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544312  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544332  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.544343  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544374  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544650  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544697  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.569577  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.728879408s)
	I1114 15:58:54.569603  876220 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 15:58:54.572090  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.572118  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.572396  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.572420  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063126  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20491351s)
	I1114 15:58:55.063197  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063218  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063551  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063572  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063583  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063596  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063609  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.063888  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063910  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.228754  876220 pod_ready.go:102] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:55.671980  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.503435235s)
	I1114 15:58:55.672050  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672066  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672415  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672481  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672502  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672514  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.672777  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672795  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672807  876220 addons.go:467] Verifying addon metrics-server=true in "embed-certs-279880"
	I1114 15:58:55.674712  876220 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:58:55.676182  876220 addons.go:502] enable addons completed in 3.128402943s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:58:54.695084  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.696106  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.844023  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:57.338618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.660605  876220 pod_ready.go:92] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.660642  876220 pod_ready.go:81] duration metric: took 3.687643856s waiting for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.660659  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671773  876220 pod_ready.go:92] pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.671803  876220 pod_ready.go:81] duration metric: took 11.134131ms waiting for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671817  876220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679179  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.679212  876220 pod_ready.go:81] duration metric: took 7.385218ms waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679224  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691696  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.691721  876220 pod_ready.go:81] duration metric: took 12.488161ms waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691734  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704134  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.704153  876220 pod_ready.go:81] duration metric: took 12.411686ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704161  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950181  876220 pod_ready.go:92] pod "kube-proxy-qdppd" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:57.950213  876220 pod_ready.go:81] duration metric: took 1.246044532s waiting for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950226  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237122  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:58.237150  876220 pod_ready.go:81] duration metric: took 286.915812ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237158  876220 pod_ready.go:38] duration metric: took 5.336525686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:58.237177  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:58:58.237227  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:58:58.260115  876220 api_server.go:72] duration metric: took 5.623228202s to wait for apiserver process to appear ...
	I1114 15:58:58.260147  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:58:58.260169  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:58:58.265361  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:58:58.266889  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:58:58.266918  876220 api_server.go:131] duration metric: took 6.76351ms to wait for apiserver health ...
	I1114 15:58:58.266938  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:58:58.439329  876220 system_pods.go:59] 9 kube-system pods found
	I1114 15:58:58.439362  876220 system_pods.go:61] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.439367  876220 system_pods.go:61] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.439371  876220 system_pods.go:61] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.439375  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.439379  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.439383  876220 system_pods.go:61] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.439387  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.439395  876220 system_pods.go:61] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.439402  876220 system_pods.go:61] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.439412  876220 system_pods.go:74] duration metric: took 172.465662ms to wait for pod list to return data ...
	I1114 15:58:58.439426  876220 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:58:58.637240  876220 default_sa.go:45] found service account: "default"
	I1114 15:58:58.637269  876220 default_sa.go:55] duration metric: took 197.834816ms for default service account to be created ...
	I1114 15:58:58.637278  876220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:58:58.840945  876220 system_pods.go:86] 9 kube-system pods found
	I1114 15:58:58.840976  876220 system_pods.go:89] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.840984  876220 system_pods.go:89] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.840990  876220 system_pods.go:89] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.840996  876220 system_pods.go:89] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.841001  876220 system_pods.go:89] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.841008  876220 system_pods.go:89] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.841014  876220 system_pods.go:89] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.841024  876220 system_pods.go:89] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.841032  876220 system_pods.go:89] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.841046  876220 system_pods.go:126] duration metric: took 203.761925ms to wait for k8s-apps to be running ...
	I1114 15:58:58.841058  876220 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:58:58.841143  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:58.857376  876220 system_svc.go:56] duration metric: took 16.307402ms WaitForService to wait for kubelet.
	I1114 15:58:58.857414  876220 kubeadm.go:581] duration metric: took 6.220529321s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:58:58.857439  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:58:59.036083  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:58:59.036112  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:58:59.036123  876220 node_conditions.go:105] duration metric: took 178.67985ms to run NodePressure ...
	I1114 15:58:59.036136  876220 start.go:228] waiting for startup goroutines ...
	I1114 15:58:59.036142  876220 start.go:233] waiting for cluster config update ...
	I1114 15:58:59.036152  876220 start.go:242] writing updated cluster config ...
	I1114 15:58:59.036464  876220 ssh_runner.go:195] Run: rm -f paused
	I1114 15:58:59.092065  876220 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:58:59.093827  876220 out.go:177] * Done! kubectl is now configured to use "embed-certs-279880" cluster and "default" namespace by default
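The apiserver wait recorded above (api_server.go polling https://192.168.39.147:8443/healthz until a 200 "ok" comes back) can be reproduced outside the test harness. A minimal sketch, not the harness's own code: it assumes the default RBAC that exposes /healthz to unauthenticated clients, skips TLS verification for brevity, and takes the endpoint URL from the log lines above.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; adjust for your own cluster.
        url := "https://192.168.39.147:8443/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip certificate verification only for this local check.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }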
	I1114 15:58:57.082065  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.082525  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:58.696271  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.195205  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.339863  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.839918  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.582598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:02.796920  876668 pod_ready.go:81] duration metric: took 4m0.000259164s waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:02.796965  876668 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:02.796978  876668 pod_ready.go:38] duration metric: took 4m6.075965552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
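The 4m0s WaitExtra timeout above is the same pattern seen for every metrics-server pod in this run: the pod never reports the Ready condition, so pod_ready.go keeps logging Ready:"False" until the context deadline expires. The condition can be inspected directly with client-go; a minimal sketch under stated assumptions (kubeconfig path read from the environment, pod name copied from the log above, both stand-ins rather than harness values):

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed inputs: kubeconfig path and the pod name from the log above.
        kubeconfig := os.Getenv("KUBECONFIG")
        podName := "metrics-server-57f55c9bc5-ss2ks"

        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), podName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the Ready condition that the harness polls via pod_ready.go.
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("%s Ready=%s reason=%s message=%s\n", podName, c.Status, c.Reason, c.Message)
            }
        }
    }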
	I1114 15:59:02.796999  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:02.797042  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:02.797123  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:02.851170  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:02.851199  876668 cri.go:89] found id: ""
	I1114 15:59:02.851210  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:02.851271  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.857251  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:02.857323  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:02.904914  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:02.904939  876668 cri.go:89] found id: ""
	I1114 15:59:02.904947  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:02.904994  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.909276  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:02.909350  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:02.944708  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:02.944778  876668 cri.go:89] found id: ""
	I1114 15:59:02.944789  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:02.944856  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.949260  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:02.949334  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:02.986830  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:02.986858  876668 cri.go:89] found id: ""
	I1114 15:59:02.986868  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:02.986928  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.991432  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:02.991511  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:03.028072  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.028101  876668 cri.go:89] found id: ""
	I1114 15:59:03.028113  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:03.028177  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.032678  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:03.032771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:03.070651  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.070671  876668 cri.go:89] found id: ""
	I1114 15:59:03.070679  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:03.070727  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.075127  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:03.075192  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:03.117191  876668 cri.go:89] found id: ""
	I1114 15:59:03.117221  876668 logs.go:284] 0 containers: []
	W1114 15:59:03.117229  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:03.117235  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:03.117300  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:03.163227  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.163255  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:03.163260  876668 cri.go:89] found id: ""
	I1114 15:59:03.163269  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:03.163322  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.167410  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.171362  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:03.171389  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:03.330078  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:03.330113  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:03.372318  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:03.372349  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.414474  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:03.414506  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.471989  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:03.472025  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.516802  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:03.516834  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:03.532186  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:03.532218  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:03.987984  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:03.988029  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:04.045261  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:04.045305  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:04.095816  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:04.095853  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:04.148084  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:04.148132  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:04.200992  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:04.201039  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:04.239171  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:04.239207  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
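The log-gathering pass above is mechanical: for each control-plane component the harness lists CRI containers by name (crictl ps -a --quiet --name=...) and then tails 400 lines from every ID it finds. A minimal sketch of that same two-step flow, assuming crictl is available on the node and the component name is passed as a command-line argument; this is an illustration of the flow shown in the log, not the harness's implementation.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Component name, e.g. "kube-apiserver"; taken from the command line here.
        name := os.Args[1]

        // Step 1: list all container IDs for the component, as the cri.go calls above do.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Printf("no containers found matching %q\n", name)
            return
        }

        // Step 2: tail the last 400 log lines from each container, as logs.go does.
        for _, id := range ids {
            logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
        }
    }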
	I1114 15:59:03.695077  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.194941  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:04.339648  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.839045  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:08.841546  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.787847  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:06.808020  876668 api_server.go:72] duration metric: took 4m16.941929205s to wait for apiserver process to appear ...
	I1114 15:59:06.808052  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:06.808087  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:06.808146  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:06.849716  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:06.849747  876668 cri.go:89] found id: ""
	I1114 15:59:06.849758  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:06.849816  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.854025  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:06.854093  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:06.894331  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:06.894361  876668 cri.go:89] found id: ""
	I1114 15:59:06.894371  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:06.894430  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.899047  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:06.899137  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:06.947156  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:06.947194  876668 cri.go:89] found id: ""
	I1114 15:59:06.947206  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:06.947279  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.952972  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:06.953045  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:06.997872  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:06.997899  876668 cri.go:89] found id: ""
	I1114 15:59:06.997910  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:06.997972  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.002282  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:07.002362  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:07.041689  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.041722  876668 cri.go:89] found id: ""
	I1114 15:59:07.041734  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:07.041800  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.045730  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:07.045797  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:07.091996  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.092021  876668 cri.go:89] found id: ""
	I1114 15:59:07.092032  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:07.092094  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.100690  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:07.100771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:07.141635  876668 cri.go:89] found id: ""
	I1114 15:59:07.141670  876668 logs.go:284] 0 containers: []
	W1114 15:59:07.141681  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:07.141689  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:07.141750  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:07.184807  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:07.184839  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.184847  876668 cri.go:89] found id: ""
	I1114 15:59:07.184857  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:07.184920  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.189361  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.197666  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:07.197694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:07.243532  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:07.243568  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:07.284479  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:07.284520  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.326309  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:07.326341  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:07.794035  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:07.794077  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.836008  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:07.836050  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:07.886157  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:07.886192  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:07.930752  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:07.930795  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.983727  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:07.983765  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:08.024969  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:08.025000  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:08.079050  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:08.079090  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:08.093653  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:08.093691  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:08.228823  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:08.228864  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:08.196022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.196145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:12.196843  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:11.340269  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:13.840055  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.780836  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:59:10.793555  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:59:10.794839  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:59:10.794868  876668 api_server.go:131] duration metric: took 3.986808086s to wait for apiserver health ...
	I1114 15:59:10.794878  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:10.794907  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:10.794989  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:10.842028  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:10.842050  876668 cri.go:89] found id: ""
	I1114 15:59:10.842059  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:10.842113  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.846938  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:10.847030  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:10.893360  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:10.893386  876668 cri.go:89] found id: ""
	I1114 15:59:10.893394  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:10.893443  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.899601  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:10.899669  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:10.949519  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:10.949542  876668 cri.go:89] found id: ""
	I1114 15:59:10.949550  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:10.949602  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.953875  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:10.953936  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:10.994565  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:10.994595  876668 cri.go:89] found id: ""
	I1114 15:59:10.994605  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:10.994659  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.999120  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:10.999187  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:11.039364  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.039392  876668 cri.go:89] found id: ""
	I1114 15:59:11.039403  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:11.039509  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.044115  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:11.044174  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:11.088803  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.088835  876668 cri.go:89] found id: ""
	I1114 15:59:11.088846  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:11.088917  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.094005  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:11.094076  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:11.145247  876668 cri.go:89] found id: ""
	I1114 15:59:11.145276  876668 logs.go:284] 0 containers: []
	W1114 15:59:11.145285  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:11.145294  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:11.145355  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:11.188916  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.188950  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.188957  876668 cri.go:89] found id: ""
	I1114 15:59:11.188967  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:11.189029  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.195578  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.200146  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:11.200174  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:11.240413  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:11.240458  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.290614  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:11.290648  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:11.638700  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:11.638743  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:11.654234  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:11.654267  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.709147  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:11.709184  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:11.751661  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:11.751701  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.796993  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:11.797041  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.841478  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:11.841510  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:11.972862  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:11.972902  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:12.019217  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:12.019260  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:12.073396  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:12.073443  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:12.142653  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:12.142694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:14.704129  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:59:14.704159  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.704167  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.704173  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.704179  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.704184  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.704191  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.704200  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.704207  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.704217  876668 system_pods.go:74] duration metric: took 3.909331461s to wait for pod list to return data ...
	I1114 15:59:14.704231  876668 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:14.706920  876668 default_sa.go:45] found service account: "default"
	I1114 15:59:14.706944  876668 default_sa.go:55] duration metric: took 2.702527ms for default service account to be created ...
	I1114 15:59:14.706954  876668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:14.714049  876668 system_pods.go:86] 8 kube-system pods found
	I1114 15:59:14.714080  876668 system_pods.go:89] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.714089  876668 system_pods.go:89] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.714096  876668 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.714101  876668 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.714106  876668 system_pods.go:89] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.714113  876668 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.714128  876668 system_pods.go:89] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.714142  876668 system_pods.go:89] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.714152  876668 system_pods.go:126] duration metric: took 7.191238ms to wait for k8s-apps to be running ...
	I1114 15:59:14.714174  876668 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:59:14.714231  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:14.734987  876668 system_svc.go:56] duration metric: took 20.804278ms WaitForService to wait for kubelet.
	I1114 15:59:14.735015  876668 kubeadm.go:581] duration metric: took 4m24.868931304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:59:14.735038  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:59:14.737844  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:59:14.737868  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:59:14.737878  876668 node_conditions.go:105] duration metric: took 2.834918ms to run NodePressure ...
	I1114 15:59:14.737889  876668 start.go:228] waiting for startup goroutines ...
	I1114 15:59:14.737895  876668 start.go:233] waiting for cluster config update ...
	I1114 15:59:14.737905  876668 start.go:242] writing updated cluster config ...
	I1114 15:59:14.738157  876668 ssh_runner.go:195] Run: rm -f paused
	I1114 15:59:14.791076  876668 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:59:14.793853  876668 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-529430" cluster and "default" namespace by default
	I1114 15:59:14.694842  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:15.887599  876396 pod_ready.go:81] duration metric: took 4m0.000892827s waiting for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:15.887641  876396 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:15.887664  876396 pod_ready.go:38] duration metric: took 4m1.199797165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:15.887694  876396 kubeadm.go:640] restartCluster took 5m7.501574769s
	W1114 15:59:15.887782  876396 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:15.887859  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:16.340114  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:18.340157  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:20.901839  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.013944828s)
	I1114 15:59:20.901933  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:20.915929  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:20.928081  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:20.937656  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:20.937756  876396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1114 15:59:20.998439  876396 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1114 15:59:20.998593  876396 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:21.145429  876396 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:21.145639  876396 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:21.145777  876396 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:21.387825  876396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:21.388897  876396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:21.396490  876396 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1114 15:59:21.518176  876396 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:21.520261  876396 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:21.520398  876396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:21.520496  876396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:21.520590  876396 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:21.520686  876396 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:21.520797  876396 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:21.520918  876396 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:21.521009  876396 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:21.521434  876396 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:21.521822  876396 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:21.522333  876396 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:21.522651  876396 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:21.522730  876396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:21.707438  876396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:21.890929  876396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:22.058077  876396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:22.234616  876396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:22.235636  876396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:22.237626  876396 out.go:204]   - Booting up control plane ...
	I1114 15:59:22.237743  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:22.241964  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:22.242976  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:22.244745  876396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:22.248349  876396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:20.341685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:22.838566  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:25.337887  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:27.341368  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.256998  876396 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005833 seconds
	I1114 15:59:32.257145  876396 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:32.272061  876396 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:32.797161  876396 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:32.797367  876396 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-842105 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 15:59:33.314721  876396 kubeadm.go:322] [bootstrap-token] Using token: 04dlot.9kpu87sb3ajm8dfs
	I1114 15:59:33.316454  876396 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:33.316628  876396 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:33.324455  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:33.328877  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:33.335460  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:33.339307  876396 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:33.422742  876396 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:33.757796  876396 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:33.759150  876396 kubeadm.go:322] 
	I1114 15:59:33.759248  876396 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:33.759281  876396 kubeadm.go:322] 
	I1114 15:59:33.759442  876396 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:33.759459  876396 kubeadm.go:322] 
	I1114 15:59:33.759495  876396 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:33.759577  876396 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:33.759647  876396 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:33.759657  876396 kubeadm.go:322] 
	I1114 15:59:33.759726  876396 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:33.759828  876396 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:33.759922  876396 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:33.759931  876396 kubeadm.go:322] 
	I1114 15:59:33.760050  876396 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1114 15:59:33.760143  876396 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:33.760154  876396 kubeadm.go:322] 
	I1114 15:59:33.760239  876396 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760360  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:33.760397  876396 kubeadm.go:322]     --control-plane 	  
	I1114 15:59:33.760408  876396 kubeadm.go:322] 
	I1114 15:59:33.760517  876396 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:33.760527  876396 kubeadm.go:322] 
	I1114 15:59:33.760624  876396 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760781  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:33.764918  876396 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:33.764993  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:59:33.765010  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:33.767708  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:29.839580  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.339612  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:33.072424  876065 pod_ready.go:81] duration metric: took 4m0.000921839s waiting for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:33.072553  876065 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:33.072606  876065 pod_ready.go:38] duration metric: took 4m10.602378093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:33.072664  876065 kubeadm.go:640] restartCluster took 4m30.632686786s
	W1114 15:59:33.072782  876065 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:33.073057  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:33.769398  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:33.781327  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:33.810672  876396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:33.810839  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:33.810927  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=old-k8s-version-842105 minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.181391  876396 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:34.181528  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.301381  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.919870  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.419262  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.919637  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.419780  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.919453  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.420046  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.919605  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.419845  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.919474  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.419303  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.919616  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.419633  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.919220  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.419298  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.919396  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.420042  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.919886  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.419274  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.920217  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.419952  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.919511  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.419619  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.919762  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.420141  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.919676  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.261922  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.188828866s)
	I1114 15:59:47.262031  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:47.276268  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:47.285701  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:47.294481  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:47.294540  876065 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:59:47.348856  876065 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:59:47.348959  876065 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:47.530233  876065 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:47.530413  876065 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:47.530581  876065 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:47.784516  876065 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:47.420108  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.920005  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.419707  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.527158  876396 kubeadm.go:1081] duration metric: took 14.716377346s to wait for elevateKubeSystemPrivileges.
	I1114 15:59:48.527193  876396 kubeadm.go:406] StartCluster complete in 5m40.211957984s
	I1114 15:59:48.527213  876396 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.527323  876396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:59:48.529723  876396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.530058  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:59:48.530134  876396 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:59:48.530222  876396 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530248  876396 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-842105"
	W1114 15:59:48.530257  876396 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:59:48.530256  876396 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530285  876396 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-842105"
	W1114 15:59:48.530297  876396 addons.go:240] addon metrics-server should already be in state true
	I1114 15:59:48.530321  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530342  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530354  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:59:48.530429  876396 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530457  876396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-842105"
	I1114 15:59:48.530764  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530793  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530805  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530795  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530818  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530822  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.549568  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1114 15:59:48.549642  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I1114 15:59:48.550081  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550240  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550734  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550755  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.550866  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550887  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.551164  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551425  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551622  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.551766  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.551813  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.552539  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1114 15:59:48.553028  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.554044  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.554063  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.554522  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.555069  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555106  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.555404  876396 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-842105"
	W1114 15:59:48.555470  876396 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:59:48.555516  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.555924  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555961  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.576876  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1114 15:59:48.576912  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I1114 15:59:48.576878  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1114 15:59:48.577223  876396 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-842105" context rescaled to 1 replicas
	I1114 15:59:48.577266  876396 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:59:48.579711  876396 out.go:177] * Verifying Kubernetes components...
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577672  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.581751  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:48.580402  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581791  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580422  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581852  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580432  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581919  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.582238  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582286  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582314  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582439  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.582735  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.582751  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.583264  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.584865  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.586792  876396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:59:48.585415  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.588364  876396 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.588378  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:59:48.588398  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.592854  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.594307  876396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:59:47.786524  876065 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:47.786668  876065 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:47.786744  876065 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:47.786843  876065 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:47.786912  876065 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:47.787108  876065 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:47.787698  876065 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:47.788301  876065 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:47.788930  876065 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:47.789533  876065 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:47.790115  876065 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:47.790449  876065 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:47.790523  876065 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:47.975724  876065 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:48.056071  876065 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:48.340177  876065 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:48.733230  876065 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:48.734350  876065 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:48.738369  876065 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:48.740013  876065 out.go:204]   - Booting up control plane ...
	I1114 15:59:48.740143  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:48.740271  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:48.743856  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:48.763450  876065 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:48.764688  876065 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:48.764768  876065 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:59:48.932286  876065 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:48.592918  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.593079  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.595739  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:59:48.595754  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:59:48.595776  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.595826  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.595852  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.596957  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.597212  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.599011  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599448  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.599710  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599755  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.599975  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.600142  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.600304  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.607351  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I1114 15:59:48.607929  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.608484  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.608509  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.608998  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.609237  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.610958  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.611196  876396 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.611210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:59:48.611228  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.613709  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614297  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.614322  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614366  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.614539  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.614631  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.614711  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.708399  876396 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.708481  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:59:48.715087  876396 node_ready.go:49] node "old-k8s-version-842105" has status "Ready":"True"
	I1114 15:59:48.715111  876396 node_ready.go:38] duration metric: took 6.675707ms waiting for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.715124  876396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718748  876396 pod_ready.go:38] duration metric: took 3.605786ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718790  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:48.718857  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:48.750191  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.773186  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:59:48.773210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:59:48.788782  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.847057  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:59:48.847090  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:59:48.905401  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:48.905442  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:59:48.986582  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:49.606449  876396 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1114 15:59:49.606451  876396 api_server.go:72] duration metric: took 1.029145444s to wait for apiserver process to appear ...
	I1114 15:59:49.606506  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:49.606530  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:59:49.709702  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.709732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.710100  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.710130  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.710144  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.710153  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.711953  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.711985  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.711994  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.755976  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:59:49.756696  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.756719  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.757036  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.757103  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.757121  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.757390  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:59:49.757410  876396 api_server.go:131] duration metric: took 150.89717ms to wait for apiserver health ...
	I1114 15:59:49.757447  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:49.763460  876396 system_pods.go:59] 2 kube-system pods found
	I1114 15:59:49.763487  876396 system_pods.go:61] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.763497  876396 system_pods.go:61] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.763509  876396 system_pods.go:74] duration metric: took 6.051168ms to wait for pod list to return data ...
	I1114 15:59:49.763518  876396 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:49.776313  876396 default_sa.go:45] found service account: "default"
	I1114 15:59:49.776341  876396 default_sa.go:55] duration metric: took 12.814566ms for default service account to be created ...
	I1114 15:59:49.776351  876396 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:49.782462  876396 system_pods.go:86] 2 kube-system pods found
	I1114 15:59:49.782502  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.782518  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.782544  876396 retry.go:31] will retry after 311.640315ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
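For context, the "retry.go:31] will retry after ..." lines above come from minikube's generic retry helper: the wait loop lists the kube-system pods and, while required control-plane components are still missing, sleeps for a growing, jittered interval before polling again, up to an overall deadline. The Go sketch below illustrates only that general pattern; it is not minikube's actual retry.go, and the names retryWithBackoff, check, initial and deadline are invented for the illustration.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff illustrates the wait pattern seen in the log above:
    // run check(), and on failure sleep for a growing, jittered delay
    // before trying again, until the overall deadline is exceeded.
    func retryWithBackoff(check func() error, initial, deadline time.Duration) error {
        start := time.Now()
        delay := initial
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out after %s: %w", deadline, err)
            }
            // Grow the delay and add jitter, mirroring the increasing
            // "will retry after ..." intervals in the log.
            jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
            fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2
        }
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("missing components: etcd, kube-apiserver")
            }
            return nil
        }, 300*time.Millisecond, 30*time.Second)
        fmt.Println("result:", err)
    }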
	I1114 15:59:50.157150  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368304542s)
	I1114 15:59:50.157269  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157286  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.157688  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.157711  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.157730  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157743  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.158219  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.158270  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.169219  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.169264  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.169275  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.169282  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending
	I1114 15:59:50.169304  876396 retry.go:31] will retry after 335.621385ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.357400  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.370764048s)
	I1114 15:59:50.357474  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.357494  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.359782  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.359789  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.359811  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.359829  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.359840  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.360228  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.360264  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.360285  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.360333  876396 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-842105"
	I1114 15:59:50.362545  876396 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:59:50.364302  876396 addons.go:502] enable addons completed in 1.834168315s: enabled=[default-storageclass storage-provisioner metrics-server]
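With the metrics-server addon enabled above, metrics-server runs as an aggregated API, and resource metrics become queryable once its pod (metrics-server-74d5856cc6-8cxxt in the lines that follow) reports Ready. A quick manual check, assuming the standard metrics-server APIService name rather than anything this log prints, would be:

    kubectl --context old-k8s-version-842105 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-842105 top nodes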
	I1114 15:59:50.616547  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.616597  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.616608  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.616623  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.616645  876396 retry.go:31] will retry after 349.737645ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.971245  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.971286  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.971298  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.971312  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.971333  876396 retry.go:31] will retry after 562.981893ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:51.541777  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:51.541822  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:51.541849  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:51.541862  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:51.541870  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:51.541892  876396 retry.go:31] will retry after 617.692214ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:52.166157  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.166192  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.166199  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.166207  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.166211  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.166227  876396 retry.go:31] will retry after 671.968353ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:52.844235  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.844269  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.844276  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.844285  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.844290  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.844309  876396 retry.go:31] will retry after 955.353451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:53.814593  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:53.814626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:53.814636  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:53.814651  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:53.814661  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:53.814680  876396 retry.go:31] will retry after 1.306938168s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:55.127401  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:55.127436  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:55.127445  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:55.127457  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:55.127465  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:55.127488  876396 retry.go:31] will retry after 1.627615182s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.759304  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:56.759339  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:56.759345  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:56.759353  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:56.759358  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:56.759373  876396 retry.go:31] will retry after 2.046606031s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.936792  876065 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004387 seconds
	I1114 15:59:56.936992  876065 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:56.965969  876065 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:57.504894  876065 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:57.505171  876065 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-490998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:59:58.021451  876065 kubeadm.go:322] [bootstrap-token] Using token: 3x3ma3.qtutj9fi1nmgzc3r
	I1114 15:59:58.023064  876065 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:58.023220  876065 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:58.028334  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:59:58.039638  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:58.043783  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:58.048814  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:58.061419  876065 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:58.075996  876065 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:59:58.328245  876065 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:58.435170  876065 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:58.436684  876065 kubeadm.go:322] 
	I1114 15:59:58.436781  876065 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:58.436796  876065 kubeadm.go:322] 
	I1114 15:59:58.436889  876065 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:58.436932  876065 kubeadm.go:322] 
	I1114 15:59:58.436988  876065 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:58.437091  876065 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:58.437155  876065 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:58.437176  876065 kubeadm.go:322] 
	I1114 15:59:58.437231  876065 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:59:58.437239  876065 kubeadm.go:322] 
	I1114 15:59:58.437281  876065 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:59:58.437288  876065 kubeadm.go:322] 
	I1114 15:59:58.437353  876065 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:58.437449  876065 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:58.437564  876065 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:58.437574  876065 kubeadm.go:322] 
	I1114 15:59:58.437684  876065 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:59:58.437800  876065 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:58.437816  876065 kubeadm.go:322] 
	I1114 15:59:58.437937  876065 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438087  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:58.438116  876065 kubeadm.go:322] 	--control-plane 
	I1114 15:59:58.438124  876065 kubeadm.go:322] 
	I1114 15:59:58.438194  876065 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:58.438202  876065 kubeadm.go:322] 
	I1114 15:59:58.438267  876065 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438355  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:58.442217  876065 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:58.442251  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:59:58.442263  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:58.444078  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:58.445560  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:58.467849  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:58.501795  876065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:58.501941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.501965  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=no-preload-490998 minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.557314  876065 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:58.891105  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:59.006867  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.811870  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:58.811905  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:58.811912  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:58.811920  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:58.811924  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:58.811939  876396 retry.go:31] will retry after 2.166453413s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:00.984597  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:00.984626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:00.984632  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:00.984638  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:00.984643  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:00.984661  876396 retry.go:31] will retry after 2.339496963s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:59.620843  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.120941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.621244  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.121507  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.621512  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.121367  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.621449  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.120920  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.620857  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.329034  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:03.329061  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:03.329067  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:03.329074  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:03.329078  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:03.329097  876396 retry.go:31] will retry after 3.593700907s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:06.929268  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:06.929308  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:06.929316  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:06.929327  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:06.929335  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:06.929357  876396 retry.go:31] will retry after 4.929780079s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:04.121245  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:04.620976  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.120894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.621609  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.121209  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.621322  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.121613  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.620968  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.121482  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.621166  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.121032  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.620894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.120992  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.621306  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.121427  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.299388  876065 kubeadm.go:1081] duration metric: took 12.79751335s to wait for elevateKubeSystemPrivileges.
	I1114 16:00:11.299429  876065 kubeadm.go:406] StartCluster complete in 5m8.910317864s
	I1114 16:00:11.299489  876065 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.299594  876065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:00:11.301841  876065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.302097  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 16:00:11.302144  876065 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 16:00:11.302251  876065 addons.go:69] Setting storage-provisioner=true in profile "no-preload-490998"
	I1114 16:00:11.302268  876065 addons.go:69] Setting default-storageclass=true in profile "no-preload-490998"
	I1114 16:00:11.302287  876065 addons.go:231] Setting addon storage-provisioner=true in "no-preload-490998"
	W1114 16:00:11.302301  876065 addons.go:240] addon storage-provisioner should already be in state true
	I1114 16:00:11.302296  876065 addons.go:69] Setting metrics-server=true in profile "no-preload-490998"
	I1114 16:00:11.302327  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:00:11.302346  876065 addons.go:231] Setting addon metrics-server=true in "no-preload-490998"
	W1114 16:00:11.302360  876065 addons.go:240] addon metrics-server should already be in state true
	I1114 16:00:11.302361  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302408  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302287  876065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-490998"
	I1114 16:00:11.302858  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302926  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302942  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302956  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302863  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.303043  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.323089  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1114 16:00:11.323101  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1114 16:00:11.323750  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.323807  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.324339  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324362  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324554  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324577  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324806  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I1114 16:00:11.325059  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325120  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325172  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.325617  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.325652  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326120  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.326138  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.326359  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.326398  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326499  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.326665  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.330090  876065 addons.go:231] Setting addon default-storageclass=true in "no-preload-490998"
	W1114 16:00:11.330115  876065 addons.go:240] addon default-storageclass should already be in state true
	I1114 16:00:11.330144  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.330381  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.330415  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.347198  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1114 16:00:11.347385  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I1114 16:00:11.347562  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1114 16:00:11.347721  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347785  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347897  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.348216  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348232  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348346  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348358  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348366  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348370  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348593  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348729  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348878  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348947  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349143  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349223  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.349270  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.351308  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.353786  876065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 16:00:11.352409  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.355097  876065 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.355119  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 16:00:11.355141  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.356613  876065 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 16:00:11.357928  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 16:00:11.357949  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 16:00:11.357969  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.358548  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359421  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.359450  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359652  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.359922  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.360221  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.360379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.362075  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362508  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.362532  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362831  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.363041  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.363234  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.363390  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.379820  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I1114 16:00:11.380297  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.380905  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.380935  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.381326  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.381573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.383433  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.383722  876065 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.383741  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 16:00:11.383762  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.386432  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.386813  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.386845  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.387062  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.387311  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.387490  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.387661  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.450418  876065 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-490998" context rescaled to 1 replicas
	I1114 16:00:11.450472  876065 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:00:11.452499  876065 out.go:177] * Verifying Kubernetes components...
	I1114 16:00:11.864833  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:11.864867  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:11.864875  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:11.864884  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:11.864891  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:11.864918  876396 retry.go:31] will retry after 6.141765036s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:11.454141  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:11.560863  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.582400  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 16:00:11.582423  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 16:00:11.596910  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.626625  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 16:00:11.626652  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 16:00:11.634166  876065 node_ready.go:35] waiting up to 6m0s for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.634309  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 16:00:11.706391  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.706421  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 16:00:11.737914  876065 node_ready.go:49] node "no-preload-490998" has status "Ready":"True"
	I1114 16:00:11.737955  876065 node_ready.go:38] duration metric: took 103.74965ms waiting for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.737969  876065 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:11.795522  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.910850  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:13.838426  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.277507449s)
	I1114 16:00:13.838488  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838481  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.241527225s)
	I1114 16:00:13.838530  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838555  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838501  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838599  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.204200469s)
	I1114 16:00:13.838636  876065 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
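The host record injection reported above is the result of the sed pipeline completed at 16:00:13.838: it inserts a "hosts" stanza ahead of the "forward . /etc/resolv.conf" directive and a "log" directive ahead of "errors" in the CoreDNS ConfigMap, so that host.minikube.internal resolves to the host-only gateway 192.168.50.1 inside the cluster. Based on that edit, the relevant part of the resulting Corefile should look roughly like the excerpt below (an illustration of the injected block, not a dump of the live ConfigMap; the elided directives are whatever the default Corefile already contained):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }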
	I1114 16:00:13.838941  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.838992  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839001  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839008  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839016  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.839032  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839047  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839057  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839066  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841315  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841335  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.841398  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841418  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855083  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.059516605s)
	I1114 16:00:13.855146  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855169  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855524  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855572  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855588  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855600  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855612  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855921  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855949  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855961  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855979  876065 addons.go:467] Verifying addon metrics-server=true in "no-preload-490998"
	I1114 16:00:13.864145  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.864168  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.864444  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.864480  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.864491  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.867459  876065 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1114 16:00:13.868861  876065 addons.go:502] enable addons completed in 2.566733189s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1114 16:00:14.067240  876065 pod_ready.go:97] error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067289  876065 pod_ready.go:81] duration metric: took 2.15639988s waiting for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	E1114 16:00:14.067306  876065 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
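The "not found (skipping!)" error above is expected rather than a failure: the coredns Deployment was rescaled from two replicas to one earlier in this run (the kapi.go:248 line at 16:00:11.450), so pod coredns-5dd5756b68-55g9l was deleted while the readiness wait was still tracking it, and the wait simply moves on to the surviving replica. The equivalent manual scale-down, shown only for illustration (minikube performs the rescale through the API rather than via kubectl), would be:

    kubectl --context no-preload-490998 -n kube-system scale deployment coredns --replicas=1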
	I1114 16:00:14.067315  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140385  876065 pod_ready.go:92] pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.140412  876065 pod_ready.go:81] duration metric: took 2.07308909s waiting for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140422  876065 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145818  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.145837  876065 pod_ready.go:81] duration metric: took 5.409163ms waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145845  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150850  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.150868  876065 pod_ready.go:81] duration metric: took 5.017013ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150877  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155895  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.155919  876065 pod_ready.go:81] duration metric: took 5.034132ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155931  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254239  876065 pod_ready.go:92] pod "kube-proxy-9nc8j" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.254270  876065 pod_ready.go:81] duration metric: took 98.331009ms waiting for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254282  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653014  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.653041  876065 pod_ready.go:81] duration metric: took 398.751468ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653049  876065 pod_ready.go:38] duration metric: took 4.915065516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:16.653066  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 16:00:16.653118  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 16:00:16.670396  876065 api_server.go:72] duration metric: took 5.219889322s to wait for apiserver process to appear ...
	I1114 16:00:16.670430  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 16:00:16.670450  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 16:00:16.675936  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 16:00:16.677570  876065 api_server.go:141] control plane version: v1.28.3
	I1114 16:00:16.677592  876065 api_server.go:131] duration metric: took 7.155742ms to wait for apiserver health ...
	I1114 16:00:16.677601  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 16:00:16.858468  876065 system_pods.go:59] 8 kube-system pods found
	I1114 16:00:16.858500  876065 system_pods.go:61] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:16.858505  876065 system_pods.go:61] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:16.858509  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:16.858514  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:16.858518  876065 system_pods.go:61] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:16.858522  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:16.858529  876065 system_pods.go:61] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:16.858534  876065 system_pods.go:61] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:16.858543  876065 system_pods.go:74] duration metric: took 180.935707ms to wait for pod list to return data ...
	I1114 16:00:16.858551  876065 default_sa.go:34] waiting for default service account to be created ...
	I1114 16:00:17.053423  876065 default_sa.go:45] found service account: "default"
	I1114 16:00:17.053478  876065 default_sa.go:55] duration metric: took 194.91891ms for default service account to be created ...
	I1114 16:00:17.053491  876065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 16:00:17.256504  876065 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:17.256539  876065 system_pods.go:89] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:17.256547  876065 system_pods.go:89] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:17.256554  876065 system_pods.go:89] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:17.256561  876065 system_pods.go:89] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:17.256567  876065 system_pods.go:89] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:17.256572  876065 system_pods.go:89] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:17.256582  876065 system_pods.go:89] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:17.256589  876065 system_pods.go:89] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:17.256602  876065 system_pods.go:126] duration metric: took 203.104027ms to wait for k8s-apps to be running ...
	I1114 16:00:17.256615  876065 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:17.256682  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:17.273098  876065 system_svc.go:56] duration metric: took 16.455935ms WaitForService to wait for kubelet.
	I1114 16:00:17.273135  876065 kubeadm.go:581] duration metric: took 5.822636312s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:17.273162  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:17.453601  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:17.453635  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:17.453675  876065 node_conditions.go:105] duration metric: took 180.505934ms to run NodePressure ...
	I1114 16:00:17.453692  876065 start.go:228] waiting for startup goroutines ...
	I1114 16:00:17.453706  876065 start.go:233] waiting for cluster config update ...
	I1114 16:00:17.453748  876065 start.go:242] writing updated cluster config ...
	I1114 16:00:17.454022  876065 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:17.505999  876065 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 16:00:17.509514  876065 out.go:177] * Done! kubectl is now configured to use "no-preload-490998" cluster and "default" namespace by default
	I1114 16:00:18.012940  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:18.012980  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:18.012988  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:18.012998  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:18.013007  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:18.013032  876396 retry.go:31] will retry after 7.087138718s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:25.105773  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:25.105804  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:25.105809  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:25.105817  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:25.105822  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:25.105842  876396 retry.go:31] will retry after 8.539395127s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:33.651084  876396 system_pods.go:86] 6 kube-system pods found
	I1114 16:00:33.651116  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:33.651121  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:33.651125  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:33.651129  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:33.651136  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:33.651141  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:33.651159  876396 retry.go:31] will retry after 10.428154724s: missing components: etcd, kube-apiserver
	I1114 16:00:44.086463  876396 system_pods.go:86] 7 kube-system pods found
	I1114 16:00:44.086496  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:44.086501  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:44.086506  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:44.086511  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:44.086515  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:44.086522  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:44.086527  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:44.086546  876396 retry.go:31] will retry after 10.535877375s: missing components: kube-apiserver
	I1114 16:00:54.631194  876396 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:54.631230  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:54.631237  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:54.631244  876396 system_pods.go:89] "kube-apiserver-old-k8s-version-842105" [3035c074-63ca-4b23-a375-415210397d17] Running
	I1114 16:00:54.631252  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:54.631259  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:54.631265  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:54.631275  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:54.631291  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:54.631304  876396 system_pods.go:126] duration metric: took 1m4.854946282s to wait for k8s-apps to be running ...
	I1114 16:00:54.631317  876396 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:54.631470  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:54.648616  876396 system_svc.go:56] duration metric: took 17.286024ms WaitForService to wait for kubelet.
	I1114 16:00:54.648650  876396 kubeadm.go:581] duration metric: took 1m6.071350783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:54.648677  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:54.652020  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:54.652055  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:54.652069  876396 node_conditions.go:105] duration metric: took 3.385579ms to run NodePressure ...
	I1114 16:00:54.652085  876396 start.go:228] waiting for startup goroutines ...
	I1114 16:00:54.652093  876396 start.go:233] waiting for cluster config update ...
	I1114 16:00:54.652106  876396 start.go:242] writing updated cluster config ...
	I1114 16:00:54.652418  876396 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:54.706394  876396 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1114 16:00:54.708374  876396 out.go:177] 
	W1114 16:00:54.709776  876396 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1114 16:00:54.711177  876396 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1114 16:00:54.712775  876396 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-842105" cluster and "default" namespace by default
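	
	Regarding the version-skew warning above: the hint printed by minikube can be followed literally to get a kubectl that matches the cluster's Kubernetes version. A minimal sketch (not part of the captured test output) using the profile name shown in this run; the -p flag is only needed when more than one profile exists:
	
	  # Runs a bundled kubectl matching the cluster's Kubernetes version (v1.16.0 for this profile)
	  minikube -p old-k8s-version-842105 kubectl -- get pods -A
	  # Everything after "--" is passed through to kubectl unchanged
	  minikube -p old-k8s-version-842105 kubectl -- get nodes -o wide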
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:54:33 UTC, ends at Tue 2023-11-14 16:09:19 UTC. --
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.189711510Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=1b6505e9-8ff2-4be1-a67b-e218fd9df623 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.189852010Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699977590768531361,StartedAt:1699977592374888537,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]string{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d4e7a8cdb1abe81115f9f4ddf44f4541/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d4e7a8cdb1abe81115f9f4ddf44f4541/containers/etcd/b41b1091,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-no-preload-490998_d4e7a8cdb1abe81115f9f4ddf44f4541/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=1b6505e9-8ff2-4be1-a67b-e218fd9df623 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.190279801Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=f850f6e4-1b5e-4830-99c3-0c3f275b9d07 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.190399435Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699977590425271468,StartedAt:1699977591398747730,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8b62aaaa08313b0380ea33995759132a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8b62aaaa08313b0380ea33995759132a/containers/kube-controller-manager/90eae46e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-490998_8b62aaaa08313b0380ea33995759132a/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=f850f6e4-1b5e-4830-99c3-0c3f275b9d07 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.203932223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5a4aada5-3297-419e-b2c8-f783a779a535 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.204077120Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5a4aada5-3297-419e-b2c8-f783a779a535 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.205216860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=578f2c49-b359-4402-ac9c-551fef62c7cd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.205780384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978159205764116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=578f2c49-b359-4402-ac9c-551fef62c7cd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.206538439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f7fc0546-cf97-4035-ac2e-e53ecd473aaa name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.206613575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f7fc0546-cf97-4035-ac2e-e53ecd473aaa name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.206820306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7fc0546-cf97-4035-ac2e-e53ecd473aaa name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.245629707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f09e9478-158c-46a3-b30a-63879d66a6f7 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.245709740Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f09e9478-158c-46a3-b30a-63879d66a6f7 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.247008917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fa0e7511-bf76-426d-8145-5a780c8bd499 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.247333671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978159247322688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=fa0e7511-bf76-426d-8145-5a780c8bd499 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.247899085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a7ffec35-9213-4280-ac35-1f2040f9d51a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.247998990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a7ffec35-9213-4280-ac35-1f2040f9d51a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.248202198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a7ffec35-9213-4280-ac35-1f2040f9d51a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.288704668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5e0b6c67-8b83-4e1f-99a7-ae138ce14bfa name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.288808231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5e0b6c67-8b83-4e1f-99a7-ae138ce14bfa name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.290697710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e2004264-aeca-4f5b-ba9e-42c7b1c35488 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.293073650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978159291200768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e2004264-aeca-4f5b-ba9e-42c7b1c35488 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.295881748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0638a246-04f1-44a9-b5ad-563769e1ec9a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.296011264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0638a246-04f1-44a9-b5ad-563769e1ec9a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:19 no-preload-490998 crio[726]: time="2023-11-14 16:09:19.296169136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0638a246-04f1-44a9-b5ad-563769e1ec9a name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c16c8a8b7d924       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   152ae7f3a0d6a       storage-provisioner
	206abe3a8e40b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   7bbe0277a33b3       kube-proxy-9nc8j
	a1d2a3ee458c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   bf58e53fbfda0       coredns-5dd5756b68-khvq4
	2ff2b10d3fae8       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   6d0dbe1c66e63       kube-scheduler-no-preload-490998
	c5e3f1f96b48b       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   bd0e50c61e6d5       kube-apiserver-no-preload-490998
	52c9022a0dbcb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   5383ecf8d0030       etcd-no-preload-490998
	e7ca7216e4f95       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   cf0343989e81e       kube-controller-manager-no-preload-490998
	
	* 
	* ==> coredns [a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-490998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-490998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=no-preload-490998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:59:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-490998
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 16:09:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:05:24 +0000   Tue, 14 Nov 2023 15:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:05:24 +0000   Tue, 14 Nov 2023 15:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:05:24 +0000   Tue, 14 Nov 2023 15:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:05:24 +0000   Tue, 14 Nov 2023 15:59:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    no-preload-490998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3b444f88fbc44fea26e699ddb0dadbc
	  System UUID:                e3b444f8-8fbc-44fe-a26e-699ddb0dadbc
	  Boot ID:                    6de318c0-2cd2-4464-a975-083168e9b66f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-khvq4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-490998                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-490998             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-490998    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-9nc8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-490998             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-cljst              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m30s)  kubelet          Node no-preload-490998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m30s)  kubelet          Node no-preload-490998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m30s)  kubelet          Node no-preload-490998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-490998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-490998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-490998 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node no-preload-490998 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s                  kubelet          Node no-preload-490998 status is now: NodeReady
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-490998 event: Registered Node no-preload-490998 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075571] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.751720] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.347618] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150840] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.536651] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.228115] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.149748] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.167735] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.124031] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.264372] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Nov14 15:55] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +19.347768] kauditd_printk_skb: 29 callbacks suppressed
	[Nov14 15:59] systemd-fstab-generator[3886]: Ignoring "noauto" for root device
	[  +9.316479] systemd-fstab-generator[4209]: Ignoring "noauto" for root device
	[Nov14 16:00] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b] <==
	* {"level":"info","ts":"2023-11-14T15:59:52.571549Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-14T15:59:52.572894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 switched to configuration voters=(4871685925895463137)"}
	{"level":"info","ts":"2023-11-14T15:59:52.573378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dd9b68cf7bac6d9","local-member-id":"439bb489ce44e0e1","added-peer-id":"439bb489ce44e0e1","added-peer-peer-urls":["https://192.168.50.251:2380"]}
	{"level":"info","ts":"2023-11-14T15:59:52.572719Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"439bb489ce44e0e1","initial-advertise-peer-urls":["https://192.168.50.251:2380"],"listen-peer-urls":["https://192.168.50.251:2380"],"advertise-client-urls":["https://192.168.50.251:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.251:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T15:59:52.573834Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T15:59:52.572136Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2023-11-14T15:59:52.574832Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2023-11-14T15:59:53.12094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T15:59:53.121104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T15:59:53.121151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgPreVoteResp from 439bb489ce44e0e1 at term 1"}
	{"level":"info","ts":"2023-11-14T15:59:53.121186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.12122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgVoteResp from 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.121253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became leader at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.121279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 439bb489ce44e0e1 elected leader 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.122758Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"439bb489ce44e0e1","local-member-attributes":"{Name:no-preload-490998 ClientURLs:[https://192.168.50.251:2379]}","request-path":"/0/members/439bb489ce44e0e1/attributes","cluster-id":"dd9b68cf7bac6d9","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:59:53.123124Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:59:53.124093Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:59:53.124868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.251:2379"}
	{"level":"info","ts":"2023-11-14T15:59:53.125026Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:59:53.125064Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:59:53.125399Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:59:53.125942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:59:53.126641Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd9b68cf7bac6d9","local-member-id":"439bb489ce44e0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:59:53.126744Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:59:53.126782Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  16:09:19 up 14 min,  0 users,  load average: 0.28, 0.29, 0.24
	Linux no-preload-490998 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a] <==
	* W1114 16:04:55.879102       1 handler_proxy.go:93] no RequestInfo found in the context
	W1114 16:04:55.879102       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:04:55.879361       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:04:55.879374       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1114 16:04:55.879410       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:04:55.881426       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:05:54.761490       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:05:55.880315       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:05:55.880432       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:05:55.880461       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:05:55.881761       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:05:55.881879       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:05:55.881920       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:06:54.761601       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:07:54.761008       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:07:55.881121       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:07:55.881288       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:07:55.881327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:07:55.882474       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:07:55.882629       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:07:55.882672       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:08:54.762514       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4] <==
	* I1114 16:03:41.734457       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:04:11.288730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:04:11.744047       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:04:41.300053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:04:41.755151       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:05:11.306392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:05:11.765126       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:05:41.313079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:05:41.773571       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:06:04.587434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.2µs"
	E1114 16:06:11.323329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:06:11.782168       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:06:15.583821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="384.585µs"
	E1114 16:06:41.330271       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:06:41.793271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:07:11.335906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:07:11.803668       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:07:41.341437       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:07:41.813049       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:08:11.357882       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:08:11.822276       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:08:41.363555       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:08:41.832501       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:09:11.368754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:09:11.844839       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874] <==
	* I1114 16:00:15.544834       1 server_others.go:69] "Using iptables proxy"
	I1114 16:00:15.655801       1 node.go:141] Successfully retrieved node IP: 192.168.50.251
	I1114 16:00:15.704386       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 16:00:15.704459       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 16:00:15.707500       1 server_others.go:152] "Using iptables Proxier"
	I1114 16:00:15.707652       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 16:00:15.707870       1 server.go:846] "Version info" version="v1.28.3"
	I1114 16:00:15.707884       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 16:00:15.708887       1 config.go:188] "Starting service config controller"
	I1114 16:00:15.709166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 16:00:15.709228       1 config.go:97] "Starting endpoint slice config controller"
	I1114 16:00:15.709234       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 16:00:15.710115       1 config.go:315] "Starting node config controller"
	I1114 16:00:15.710154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 16:00:15.809705       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 16:00:15.809770       1 shared_informer.go:318] Caches are synced for service config
	I1114 16:00:15.815309       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30] <==
	* W1114 15:59:54.958638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:54.958674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 15:59:54.958744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:54.958756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 15:59:54.958942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:59:54.959038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:59:55.819846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:59:55.819937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 15:59:55.838323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:59:55.838351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 15:59:55.882261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:59:55.882352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 15:59:55.891560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:55.891629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 15:59:55.913017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:59:55.913070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:59:56.066763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:59:56.066899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 15:59:56.085479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:56.085603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 15:59:56.138358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:56.138499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 15:59:56.350026       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 15:59:56.350110       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1114 15:59:58.627477       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:54:33 UTC, ends at Tue 2023-11-14 16:09:19 UTC. --
	Nov 14 16:06:43 no-preload-490998 kubelet[4216]: E1114 16:06:43.562765    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:06:54 no-preload-490998 kubelet[4216]: E1114 16:06:54.564636    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:06:58 no-preload-490998 kubelet[4216]: E1114 16:06:58.677049    4216 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:06:58 no-preload-490998 kubelet[4216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:06:58 no-preload-490998 kubelet[4216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:06:58 no-preload-490998 kubelet[4216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:07:08 no-preload-490998 kubelet[4216]: E1114 16:07:08.564255    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:07:19 no-preload-490998 kubelet[4216]: E1114 16:07:19.563129    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:07:31 no-preload-490998 kubelet[4216]: E1114 16:07:31.562796    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:07:45 no-preload-490998 kubelet[4216]: E1114 16:07:45.562201    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:07:58 no-preload-490998 kubelet[4216]: E1114 16:07:58.678082    4216 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:07:58 no-preload-490998 kubelet[4216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:07:58 no-preload-490998 kubelet[4216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:07:58 no-preload-490998 kubelet[4216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:07:59 no-preload-490998 kubelet[4216]: E1114 16:07:59.563551    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:08:13 no-preload-490998 kubelet[4216]: E1114 16:08:13.564477    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:08:28 no-preload-490998 kubelet[4216]: E1114 16:08:28.563326    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:08:41 no-preload-490998 kubelet[4216]: E1114 16:08:41.562063    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:08:55 no-preload-490998 kubelet[4216]: E1114 16:08:55.563159    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:08:58 no-preload-490998 kubelet[4216]: E1114 16:08:58.675725    4216 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:08:58 no-preload-490998 kubelet[4216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:08:58 no-preload-490998 kubelet[4216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:08:58 no-preload-490998 kubelet[4216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:09:07 no-preload-490998 kubelet[4216]: E1114 16:09:07.563420    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:09:18 no-preload-490998 kubelet[4216]: E1114 16:09:18.563045    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	
	* 
	* ==> storage-provisioner [c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b] <==
	* I1114 16:00:15.520259       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 16:00:15.534900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 16:00:15.535343       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 16:00:15.550258       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 16:00:15.550701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-490998_fe5af1c2-ba49-4b80-8dd0-8ceb66467d8d!
	I1114 16:00:15.556584       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbb40898-897a-4836-aaa9-fe3ebbe609bf", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-490998_fe5af1c2-ba49-4b80-8dd0-8ceb66467d8d became leader
	I1114 16:00:15.651102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-490998_fe5af1c2-ba49-4b80-8dd0-8ceb66467d8d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-490998 -n no-preload-490998
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-490998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-cljst
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-490998 describe pod metrics-server-57f55c9bc5-cljst
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-490998 describe pod metrics-server-57f55c9bc5-cljst: exit status 1 (75.938811ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cljst" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-490998 describe pod metrics-server-57f55c9bc5-cljst: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.23s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1114 16:00:55.158894  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 16:01:27.620877  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 16:01:34.577539  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 16:01:36.376654  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 16:02:18.206375  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 16:02:21.221206  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 16:02:50.674612  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 16:02:59.422481  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 16:03:22.913007  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 16:03:44.267231  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 16:03:48.692312  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 16:03:52.668805  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 16:03:53.607189  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 16:04:39.653106  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 16:04:45.957756  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 16:05:11.736160  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 16:05:15.715017  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 16:05:16.653734  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 16:05:55.158495  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 16:06:02.696232  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 16:06:27.620773  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 16:06:34.577136  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 16:06:36.376334  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 16:07:21.221828  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-842105 -n old-k8s-version-842105
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:09:55.301253089 +0000 UTC m=+5462.831438044
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-842105 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-842105 logs -n 25: (1.632090015s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:49:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:49:49.997953  876668 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:49:49.998137  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998147  876668 out.go:309] Setting ErrFile to fd 2...
	I1114 15:49:49.998152  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998369  876668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:49:49.998978  876668 out.go:303] Setting JSON to false
	I1114 15:49:50.000072  876668 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":45142,"bootTime":1699931848,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:49:50.000141  876668 start.go:138] virtualization: kvm guest
	I1114 15:49:50.002690  876668 out.go:177] * [default-k8s-diff-port-529430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:49:50.004392  876668 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:49:50.004396  876668 notify.go:220] Checking for updates...
	I1114 15:49:50.006193  876668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:49:50.007844  876668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:49:50.009232  876668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:49:50.010572  876668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:49:50.011857  876668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:49:50.013604  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:49:50.014059  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.014149  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.028903  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I1114 15:49:50.029290  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.029869  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.029892  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.030244  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.030474  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.030753  876668 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:49:50.031049  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.031096  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.045696  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I1114 15:49:50.046117  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.046625  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.046658  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.047069  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.047303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.082731  876668 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:49:50.084362  876668 start.go:298] selected driver: kvm2
	I1114 15:49:50.084384  876668 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.084517  876668 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:49:50.085533  876668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.085625  876668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:49:50.100834  876668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:49:50.101226  876668 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:49:50.101308  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:49:50.101328  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:49:50.101342  876668 start_flags.go:323] config:
	{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-52943
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.101540  876668 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.103413  876668 out.go:177] * Starting control plane node default-k8s-diff-port-529430 in cluster default-k8s-diff-port-529430
	I1114 15:49:49.196989  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:52.269051  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:50.104763  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:49:50.104815  876668 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:49:50.104835  876668 cache.go:56] Caching tarball of preloaded images
	I1114 15:49:50.104932  876668 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:49:50.104946  876668 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:49:50.105089  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:49:50.105307  876668 start.go:365] acquiring machines lock for default-k8s-diff-port-529430: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:49:58.349061  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:01.421017  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:07.501030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:10.573058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:16.653093  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:19.725040  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:25.805047  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:28.877039  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:34.957084  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:38.029008  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:44.109068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:47.181018  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:53.261065  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:56.333048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:02.413048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:05.485063  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:11.565034  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:14.636996  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:20.717050  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:23.789097  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:29.869058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:32.941066  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:39.021029  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:42.093064  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:48.173074  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:51.245007  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:57.325014  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:00.397111  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:06.477052  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:09.549016  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:15.629105  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:18.701000  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:24.781004  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:27.853046  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:33.933030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:37.005067  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:43.085068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:46.157044  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:52.237056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:55.309080  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:01.389056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:04.461005  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:10.541083  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:13.613033  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
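(The repeated "no route to host" lines above are libmachine probing TCP port 22 on the stopped no-preload VM while it is still down. A minimal stand-alone Go sketch of that probe — not minikube source; the address is simply copied from the log — would be:)

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Address taken from the log lines above; adjust for your own VM.
    	conn, err := net.DialTimeout("tcp", "192.168.50.251:22", 10*time.Second)
    	if err != nil {
    		fmt.Println("dial error:", err) // e.g. "connect: no route to host"
    		return
    	}
    	defer conn.Close()
    	fmt.Println("SSH port reachable")
    }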
	I1114 15:53:16.617368  876220 start.go:369] acquired machines lock for "embed-certs-279880" in 4m25.691009916s
	I1114 15:53:16.617492  876220 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:16.617500  876220 fix.go:54] fixHost starting: 
	I1114 15:53:16.617993  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:16.618029  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:16.633223  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1114 15:53:16.633787  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:16.634385  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:53:16.634412  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:16.634743  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:16.634958  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:16.635120  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:53:16.636933  876220 fix.go:102] recreateIfNeeded on embed-certs-279880: state=Stopped err=<nil>
	I1114 15:53:16.636967  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	W1114 15:53:16.637164  876220 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:16.638727  876220 out.go:177] * Restarting existing kvm2 VM for "embed-certs-279880" ...
	I1114 15:53:16.615062  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:16.615116  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:53:16.617147  876065 machine.go:91] provisioned docker machine in 4m37.399136623s
	I1114 15:53:16.617196  876065 fix.go:56] fixHost completed within 4m37.422027817s
	I1114 15:53:16.617203  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 4m37.422123699s
	W1114 15:53:16.617282  876065 start.go:691] error starting host: provision: host is not running
	W1114 15:53:16.617491  876065 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1114 15:53:16.617502  876065 start.go:706] Will try again in 5 seconds ...
	I1114 15:53:16.640137  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Start
	I1114 15:53:16.640330  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring networks are active...
	I1114 15:53:16.641029  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network default is active
	I1114 15:53:16.641386  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network mk-embed-certs-279880 is active
	I1114 15:53:16.641738  876220 main.go:141] libmachine: (embed-certs-279880) Getting domain xml...
	I1114 15:53:16.642488  876220 main.go:141] libmachine: (embed-certs-279880) Creating domain...
	I1114 15:53:17.858298  876220 main.go:141] libmachine: (embed-certs-279880) Waiting to get IP...
	I1114 15:53:17.859506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:17.859912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:17.860039  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:17.859881  877182 retry.go:31] will retry after 225.269159ms: waiting for machine to come up
	I1114 15:53:18.086611  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.087099  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.087135  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.087062  877182 retry.go:31] will retry after 322.840106ms: waiting for machine to come up
	I1114 15:53:18.411781  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.412238  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.412278  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.412127  877182 retry.go:31] will retry after 459.77881ms: waiting for machine to come up
	I1114 15:53:18.873994  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.874393  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.874433  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.874341  877182 retry.go:31] will retry after 460.123636ms: waiting for machine to come up
	I1114 15:53:19.335916  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.336488  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.336520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.336414  877182 retry.go:31] will retry after 526.141665ms: waiting for machine to come up
	I1114 15:53:19.864336  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.864830  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.864856  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.864766  877182 retry.go:31] will retry after 817.261813ms: waiting for machine to come up
	I1114 15:53:20.683806  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:20.684289  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:20.684309  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:20.684244  877182 retry.go:31] will retry after 1.026381849s: waiting for machine to come up
	I1114 15:53:21.619196  876065 start.go:365] acquiring machines lock for no-preload-490998: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:53:21.712760  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:21.713237  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:21.713263  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:21.713201  877182 retry.go:31] will retry after 1.088672482s: waiting for machine to come up
	I1114 15:53:22.803222  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:22.803698  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:22.803734  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:22.803639  877182 retry.go:31] will retry after 1.394534659s: waiting for machine to come up
	I1114 15:53:24.199372  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:24.199764  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:24.199794  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:24.199706  877182 retry.go:31] will retry after 1.511094366s: waiting for machine to come up
	I1114 15:53:25.713650  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:25.714062  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:25.714107  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:25.713980  877182 retry.go:31] will retry after 1.821074261s: waiting for machine to come up
	I1114 15:53:27.536875  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:27.537423  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:27.537458  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:27.537349  877182 retry.go:31] will retry after 2.856840662s: waiting for machine to come up
	I1114 15:53:30.395562  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:30.396059  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:30.396086  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:30.396007  877182 retry.go:31] will retry after 4.003431067s: waiting for machine to come up
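(The "will retry after …" intervals above come from minikube's retry helper while waiting for the restarted embed-certs VM to obtain an IP. A generic growing-backoff wait loop looks roughly like the sketch below; the growth factor and cap are illustrative assumptions, not the actual retry.go values:)

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check() with a growing backoff until it succeeds or maxWait elapses.
    func waitFor(check func() error, maxWait time.Duration) error {
    	deadline := time.Now().Add(maxWait)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if err := check(); err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
    		time.Sleep(backoff)
    		backoff = backoff * 3 / 2 // grow the interval, roughly as the log timestamps suggest
    		if backoff > 5*time.Second {
    			backoff = 5 * time.Second
    		}
    	}
    	return errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	_ = waitFor(func() error { return errors.New("no IP yet") }, 3*time.Second)
    }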
	I1114 15:53:35.689894  876396 start.go:369] acquired machines lock for "old-k8s-version-842105" in 4m23.221865246s
	I1114 15:53:35.689964  876396 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:35.689973  876396 fix.go:54] fixHost starting: 
	I1114 15:53:35.690410  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:35.690446  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:35.709418  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I1114 15:53:35.709816  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:35.710366  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:53:35.710400  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:35.710760  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:35.710946  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:35.711101  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:53:35.712666  876396 fix.go:102] recreateIfNeeded on old-k8s-version-842105: state=Stopped err=<nil>
	I1114 15:53:35.712696  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	W1114 15:53:35.712882  876396 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:35.715357  876396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-842105" ...
	I1114 15:53:34.403163  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.403706  876220 main.go:141] libmachine: (embed-certs-279880) Found IP for machine: 192.168.39.147
	I1114 15:53:34.403737  876220 main.go:141] libmachine: (embed-certs-279880) Reserving static IP address...
	I1114 15:53:34.403757  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has current primary IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.404290  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.404318  876220 main.go:141] libmachine: (embed-certs-279880) DBG | skip adding static IP to network mk-embed-certs-279880 - found existing host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"}
	I1114 15:53:34.404331  876220 main.go:141] libmachine: (embed-certs-279880) Reserved static IP address: 192.168.39.147
	I1114 15:53:34.404343  876220 main.go:141] libmachine: (embed-certs-279880) Waiting for SSH to be available...
	I1114 15:53:34.404351  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Getting to WaitForSSH function...
	I1114 15:53:34.406833  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407219  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.407248  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407367  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH client type: external
	I1114 15:53:34.407412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa (-rw-------)
	I1114 15:53:34.407481  876220 main.go:141] libmachine: (embed-certs-279880) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:34.407525  876220 main.go:141] libmachine: (embed-certs-279880) DBG | About to run SSH command:
	I1114 15:53:34.407551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | exit 0
	I1114 15:53:34.504225  876220 main.go:141] libmachine: (embed-certs-279880) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:34.504696  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetConfigRaw
	I1114 15:53:34.505414  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.508202  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.508632  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.508685  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.509034  876220 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/config.json ...
	I1114 15:53:34.509282  876220 machine.go:88] provisioning docker machine ...
	I1114 15:53:34.509309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:34.509521  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509730  876220 buildroot.go:166] provisioning hostname "embed-certs-279880"
	I1114 15:53:34.509758  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509894  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.511987  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512285  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.512317  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512472  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.512629  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512751  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512925  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.513118  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.513578  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.513594  876220 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-279880 && echo "embed-certs-279880" | sudo tee /etc/hostname
	I1114 15:53:34.664546  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-279880
	
	I1114 15:53:34.664595  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.667537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.667908  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.667941  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.668142  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.668388  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668631  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668910  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.669142  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.669652  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.669684  876220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-279880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-279880/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-279880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:34.810710  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:34.810745  876220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:34.810768  876220 buildroot.go:174] setting up certificates
	I1114 15:53:34.810780  876220 provision.go:83] configureAuth start
	I1114 15:53:34.810788  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.811138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.814056  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.814537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814747  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.817131  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817513  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.817544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817675  876220 provision.go:138] copyHostCerts
	I1114 15:53:34.817774  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:34.817789  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:34.817869  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:34.817990  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:34.818006  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:34.818039  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:34.818117  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:34.818129  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:34.818161  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:34.818226  876220 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.embed-certs-279880 san=[192.168.39.147 192.168.39.147 localhost 127.0.0.1 minikube embed-certs-279880]
	I1114 15:53:34.925955  876220 provision.go:172] copyRemoteCerts
	I1114 15:53:34.926014  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:34.926039  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.928954  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929322  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.929346  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929520  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.929703  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.929866  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.930033  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.026199  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:35.049682  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1114 15:53:35.072415  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:53:35.097200  876220 provision.go:86] duration metric: configureAuth took 286.405404ms
	I1114 15:53:35.097226  876220 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:35.097425  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:53:35.097558  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.100561  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.100912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.100965  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.101091  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.101296  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101500  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101641  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.101795  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.102165  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.102198  876220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:35.411682  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:35.411719  876220 machine.go:91] provisioned docker machine in 902.419916ms
	I1114 15:53:35.411733  876220 start.go:300] post-start starting for "embed-certs-279880" (driver="kvm2")
	I1114 15:53:35.411748  876220 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:35.411770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.412161  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:35.412201  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.415071  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.415551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.415849  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.416000  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.416143  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.512565  876220 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:35.517087  876220 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:35.517146  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:35.517235  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:35.517356  876220 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:35.517511  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:35.527797  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:35.552798  876220 start.go:303] post-start completed in 141.045785ms
	I1114 15:53:35.552827  876220 fix.go:56] fixHost completed within 18.935326604s
	I1114 15:53:35.552855  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.555540  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.555930  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.555970  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.556155  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.556390  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556573  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.557007  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.557338  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.557348  876220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:35.689729  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977215.639237319
	
	I1114 15:53:35.689759  876220 fix.go:206] guest clock: 1699977215.639237319
	I1114 15:53:35.689769  876220 fix.go:219] Guest: 2023-11-14 15:53:35.639237319 +0000 UTC Remote: 2023-11-14 15:53:35.552830911 +0000 UTC m=+284.779127994 (delta=86.406408ms)
	I1114 15:53:35.689801  876220 fix.go:190] guest clock delta is within tolerance: 86.406408ms
	I1114 15:53:35.689807  876220 start.go:83] releasing machines lock for "embed-certs-279880", held for 19.072338997s
	I1114 15:53:35.689842  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.690197  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:35.692786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693260  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.693311  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693440  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694011  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694222  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694338  876220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:35.694404  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.694455  876220 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:35.694484  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.697198  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697220  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697702  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697732  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697771  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697865  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698085  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698088  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698297  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698303  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698438  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698562  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.698974  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.813318  876220 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:35.819124  876220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:35.957038  876220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:35.964876  876220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:35.964984  876220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:35.980758  876220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:35.980780  876220 start.go:472] detecting cgroup driver to use...
	I1114 15:53:35.980848  876220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:35.993968  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:36.006564  876220 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:36.006626  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:36.021314  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:36.035842  876220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:36.147617  876220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:36.268025  876220 docker.go:219] disabling docker service ...
	I1114 15:53:36.268113  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:36.280847  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:36.292659  876220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:36.414923  876220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:36.534481  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:36.547652  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:36.565229  876220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:53:36.565312  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.574949  876220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:36.575030  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.585105  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.594790  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.603613  876220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:36.613116  876220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:36.620828  876220 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:36.620884  876220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:36.632600  876220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:36.642150  876220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:36.756773  876220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:36.929381  876220 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:36.929467  876220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:36.934735  876220 start.go:540] Will wait 60s for crictl version
	I1114 15:53:36.934790  876220 ssh_runner.go:195] Run: which crictl
	I1114 15:53:36.940182  876220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:36.991630  876220 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:36.991718  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.045160  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.097281  876220 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:53:35.716835  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Start
	I1114 15:53:35.716987  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring networks are active...
	I1114 15:53:35.717715  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network default is active
	I1114 15:53:35.718030  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network mk-old-k8s-version-842105 is active
	I1114 15:53:35.718429  876396 main.go:141] libmachine: (old-k8s-version-842105) Getting domain xml...
	I1114 15:53:35.719055  876396 main.go:141] libmachine: (old-k8s-version-842105) Creating domain...
	I1114 15:53:36.991860  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting to get IP...
	I1114 15:53:36.992911  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:36.993376  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:36.993427  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:36.993318  877295 retry.go:31] will retry after 227.553321ms: waiting for machine to come up
	I1114 15:53:37.223023  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.223561  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.223629  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.223511  877295 retry.go:31] will retry after 308.951372ms: waiting for machine to come up
	I1114 15:53:37.098693  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:37.102205  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102676  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:37.102710  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102955  876220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:37.107113  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:37.120009  876220 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:53:37.120075  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:37.160178  876220 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:53:37.160241  876220 ssh_runner.go:195] Run: which lz4
	I1114 15:53:37.164351  876220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:53:37.168645  876220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:53:37.168684  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:53:39.026796  876220 crio.go:444] Took 1.862508 seconds to copy over tarball
	I1114 15:53:39.026876  876220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:53:37.534243  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.534797  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.534827  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.534774  877295 retry.go:31] will retry after 440.76682ms: waiting for machine to come up
	I1114 15:53:37.977712  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.978257  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.978287  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.978207  877295 retry.go:31] will retry after 402.601155ms: waiting for machine to come up
	I1114 15:53:38.383001  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.383515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.383551  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.383468  877295 retry.go:31] will retry after 580.977501ms: waiting for machine to come up
	I1114 15:53:38.966457  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.967088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.967121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.967026  877295 retry.go:31] will retry after 679.65563ms: waiting for machine to come up
	I1114 15:53:39.648086  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:39.648560  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:39.648593  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:39.648501  877295 retry.go:31] will retry after 1.014858956s: waiting for machine to come up
	I1114 15:53:40.664728  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:40.665285  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:40.665321  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:40.665230  877295 retry.go:31] will retry after 1.035036164s: waiting for machine to come up
	I1114 15:53:41.701639  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:41.702088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:41.702123  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:41.702029  877295 retry.go:31] will retry after 1.15711647s: waiting for machine to come up
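The interleaved "will retry after …: waiting for machine to come up" lines come from a retry helper that re-polls the libvirt domain for an IP address with growing waits. A standalone sketch of that retry-until-deadline shape; the backoff progression and timeout values here are assumptions, only the overall pattern is taken from the log:

// retry_sketch.go: retry-until-ready loop in the spirit of the
// "will retry after Xms: waiting for machine to come up" messages above.
// The backoff progression is an assumption; only the shape is from the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor keeps calling check until it succeeds or the deadline passes.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, delay)
		time.Sleep(delay)
		if delay < 2*time.Second {
			delay *= 2 // grow the wait, roughly like the log's increasing intervals
		}
	}
}

func main() {
	start := time.Now()
	// Stand-in for "does the machine have an IP yet?": succeeds after ~1.5s.
	err := waitFor(func() error {
		if time.Since(start) < 1500*time.Millisecond {
			return errors.New("no IP address yet")
		}
		return nil
	}, 10*time.Second)
	fmt.Println("result:", err)
}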
	I1114 15:53:41.885259  876220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.858355323s)
	I1114 15:53:41.885288  876220 crio.go:451] Took 2.858463 seconds to extract the tarball
	I1114 15:53:41.885300  876220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:53:41.926498  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:41.972943  876220 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:53:41.972980  876220 cache_images.go:84] Images are preloaded, skipping loading
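The preload path above is: stat the tarball, scp the cached .tar.lz4 over if it is missing, extract it with lz4-compressed tar into /var, then re-run crictl images to confirm everything is in place. A rough local sketch of that sequence, assuming the logged paths and that tar, lz4 and crictl are installed (minikube performs these steps inside the guest VM):

// preload_sketch.go: the preload check/extract sequence from the log,
// run locally for illustration (minikube does this inside the guest VM).
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// Does the preloaded tarball already exist on the target?
	if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("tarball missing; minikube would scp the cached .tar.lz4 here first")
		return
	}
	// Extract the container images into /var (lz4-compressed tar).
	if err := run("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// Confirm the images are now visible to the CRI runtime.
	_ = run("sudo", "crictl", "images", "--output", "json")
}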
	I1114 15:53:41.973053  876220 ssh_runner.go:195] Run: crio config
	I1114 15:53:42.038006  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:53:42.038032  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:53:42.038053  876220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:53:42.038071  876220 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-279880 NodeName:embed-certs-279880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:53:42.038234  876220 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-279880"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:53:42.038323  876220 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-279880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:53:42.038394  876220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:53:42.050379  876220 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:53:42.050462  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:53:42.058921  876220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1114 15:53:42.074304  876220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:53:42.090403  876220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1114 15:53:42.106412  876220 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I1114 15:53:42.109907  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:42.122915  876220 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880 for IP: 192.168.39.147
	I1114 15:53:42.122945  876220 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:53:42.123106  876220 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:53:42.123148  876220 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:53:42.123226  876220 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/client.key
	I1114 15:53:42.123290  876220 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key.a88b087d
	I1114 15:53:42.123322  876220 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key
	I1114 15:53:42.123430  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:53:42.123456  876220 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:53:42.123467  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:53:42.123486  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:53:42.123517  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:53:42.123541  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:53:42.123584  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:42.124261  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:53:42.149787  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:53:42.177563  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:53:42.203326  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:53:42.228832  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:53:42.254674  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:53:42.280548  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:53:42.305318  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:53:42.331461  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:53:42.356555  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:53:42.382688  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:53:42.407945  876220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:53:42.424902  876220 ssh_runner.go:195] Run: openssl version
	I1114 15:53:42.430411  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:53:42.443033  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448429  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448496  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.455631  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:53:42.466421  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:53:42.476013  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480381  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480434  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.486048  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:53:42.495375  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:53:42.505366  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509762  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509804  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.515519  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
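The test -L / ln -fs pairs above install each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem), which is how the TLS stack locates trusted roots. A small sketch of that convention; the input path is a placeholder taken from the log, and creating the link needs root:

// ca_hash_link.go: link a CA cert into /etc/ssl/certs under its
// OpenSSL subject-hash name, as the commands above do.
// The input path is a placeholder; run as root to create the symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: remove any stale link, then point hash.0 at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink failed (need root?):", err)
		return
	}
	fmt.Println(link, "->", cert)
}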
	I1114 15:53:42.524838  876220 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:53:42.528912  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:53:42.534641  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:53:42.540138  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:53:42.545849  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:53:42.551518  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:53:42.559001  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
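Each -checkend 86400 call above asks whether a certificate stays valid for at least another 24 hours; openssl exits 0 if it does and non-zero if it would expire within the window. A one-function sketch of the same check, with a placeholder certificate path:

// cert_checkend.go: mirror of "openssl x509 -noout -in <cert> -checkend 86400".
// openssl exits 0 if the cert is still valid 86400s from now, non-zero otherwise.
package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(path string) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	if err == nil {
		return false, nil // valid for at least another day
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // openssl ran, but the cert expires within 86400s
	}
	return false, err // openssl itself could not run
}

func main() {
	// Placeholder path; the log checks the apiserver/etcd client and peer certs.
	soon, err := expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println("expires within 24h:", soon, "err:", err)
}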
	I1114 15:53:42.566135  876220 kubeadm.go:404] StartCluster: {Name:embed-certs-279880 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:53:42.566241  876220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:53:42.566297  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:42.613075  876220 cri.go:89] found id: ""
	I1114 15:53:42.613158  876220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:53:42.622675  876220 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:53:42.622696  876220 kubeadm.go:636] restartCluster start
	I1114 15:53:42.622785  876220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:53:42.631529  876220 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.633202  876220 kubeconfig.go:92] found "embed-certs-279880" server: "https://192.168.39.147:8443"
	I1114 15:53:42.636588  876220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:53:42.645531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.645578  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.656733  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.656764  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.656807  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.667524  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.168290  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.168372  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.181051  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.668650  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.668772  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.681727  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.168359  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.168462  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.182012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.668666  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.668763  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.680872  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.168505  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.168625  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.180321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.667875  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.668016  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.680318  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.861352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:42.861900  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:42.861963  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:42.861836  877295 retry.go:31] will retry after 2.117184279s: waiting for machine to come up
	I1114 15:53:44.982059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:44.982506  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:44.982538  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:44.982449  877295 retry.go:31] will retry after 2.3999215s: waiting for machine to come up
	I1114 15:53:46.168271  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.168410  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.180809  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:46.667886  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.668009  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.679468  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.168072  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.168204  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.180268  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.667786  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.667948  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.678927  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.168531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.168660  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.180004  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.668597  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.668752  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.680945  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.168543  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.168635  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.180012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.668382  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.668486  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.683691  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.168265  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.168353  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.179169  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.667618  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.667730  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.678707  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.384177  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:47.384695  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:47.384734  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:47.384649  877295 retry.go:31] will retry after 2.820309413s: waiting for machine to come up
	I1114 15:53:50.208736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:50.209188  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:50.209221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:50.209130  877295 retry.go:31] will retry after 2.822648093s: waiting for machine to come up
	I1114 15:53:51.168046  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.168144  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.179168  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:51.668301  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.668407  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.680321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.167971  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:52.168062  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:52.179159  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
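The repeated "Checking apiserver status" attempts above are a poll loop: run pgrep for a minikube-started kube-apiserver, back off, and give up once the surrounding context deadline passes, which surfaces as the "context deadline exceeded" reported just below. A compact sketch of that pattern; the 10s timeout and 500ms interval here are illustrative assumptions:

// apiserver_poll.go: poll for a kube-apiserver process until a deadline,
// roughly the loop behind the repeated pgrep attempts above.
// The 10s timeout and 500ms interval are assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same probe as the log: is there a kube-apiserver started by minikube?
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println("apiserver wait:", waitForAPIServer(ctx))
}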
	I1114 15:53:52.645656  876220 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:53:52.645688  876220 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:53:52.645702  876220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:53:52.645806  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:52.682368  876220 cri.go:89] found id: ""
	I1114 15:53:52.682482  876220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:53:52.697186  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:53:52.705449  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:53:52.705503  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714019  876220 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714054  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:52.831334  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.796131  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.984427  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.060195  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
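Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, assuming a kubeadm binary on PATH rather than the version-pinned one under /var/lib/minikube/binaries:

// kubeadm_phases.go: replay the init phases used in the restart path above.
// Assumes kubeadm is on PATH and /var/tmp/minikube/kubeadm.yaml exists;
// the log invokes a version-pinned binary under /var/lib/minikube/binaries.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},         // regenerate any missing certificates
		{"kubeconfig", "all"},    // admin/kubelet/controller-manager/scheduler kubeconfigs
		{"kubelet-start"},        // write kubelet config and (re)start it
		{"control-plane", "all"}, // static pod manifests for the control plane
		{"etcd", "local"},        // static pod manifest for local etcd
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		fmt.Println("kubeadm", args)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}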
	I1114 15:53:54.137132  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:53:54.137217  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.155040  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.676264  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.176129  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.676614  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:53.034614  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:53.035044  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:53.035078  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:53.034993  877295 retry.go:31] will retry after 4.160398149s: waiting for machine to come up
	I1114 15:53:57.196776  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197211  876396 main.go:141] libmachine: (old-k8s-version-842105) Found IP for machine: 192.168.72.151
	I1114 15:53:57.197240  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserving static IP address...
	I1114 15:53:57.197260  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has current primary IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197667  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.197700  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserved static IP address: 192.168.72.151
	I1114 15:53:57.197724  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | skip adding static IP to network mk-old-k8s-version-842105 - found existing host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"}
	I1114 15:53:57.197742  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting for SSH to be available...
	I1114 15:53:57.197754  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Getting to WaitForSSH function...
	I1114 15:53:57.200279  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200646  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.200681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200916  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH client type: external
	I1114 15:53:57.200948  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa (-rw-------)
	I1114 15:53:57.200983  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:57.200999  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | About to run SSH command:
	I1114 15:53:57.201015  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | exit 0
	I1114 15:53:57.288554  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:57.288904  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetConfigRaw
	I1114 15:53:57.289691  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.292087  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292445  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.292501  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292720  876396 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/config.json ...
	I1114 15:53:57.292930  876396 machine.go:88] provisioning docker machine ...
	I1114 15:53:57.292950  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:57.293164  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293326  876396 buildroot.go:166] provisioning hostname "old-k8s-version-842105"
	I1114 15:53:57.293352  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293472  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.295765  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296139  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.296170  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296299  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.296470  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296625  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296768  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.296945  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.297524  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.297546  876396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-842105 && echo "old-k8s-version-842105" | sudo tee /etc/hostname
	I1114 15:53:58.537304  876668 start.go:369] acquired machines lock for "default-k8s-diff-port-529430" in 4m8.43196122s
	I1114 15:53:58.537380  876668 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:58.537392  876668 fix.go:54] fixHost starting: 
	I1114 15:53:58.537828  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:58.537865  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:58.555361  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I1114 15:53:58.555809  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:58.556346  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:53:58.556379  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:58.556762  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:58.556993  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:53:58.557144  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:53:58.558707  876668 fix.go:102] recreateIfNeeded on default-k8s-diff-port-529430: state=Stopped err=<nil>
	I1114 15:53:58.558736  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	W1114 15:53:58.558888  876668 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:58.561175  876668 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-529430" ...
	I1114 15:53:57.423888  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842105
	
	I1114 15:53:57.423971  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.427115  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427421  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.427459  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427660  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.427882  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428135  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428351  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.428584  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.429089  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.429124  876396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-842105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-842105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-842105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:57.554847  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
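The SSH script above makes the 127.0.1.1 entry in /etc/hosts idempotent: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic as a small Go sketch; the hostname is taken from the log and writing /etc/hosts needs root:

// hosts_hostname.go: the idempotent /etc/hosts update shown above:
// make sure 127.0.1.1 maps to the machine's hostname, editing in place.
// Writing /etc/hosts needs root; the hostname below comes from the log.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	// Already present on some line? Then there is nothing to do.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "old-k8s-version-842105"))
}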
	I1114 15:53:57.554893  876396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:57.554957  876396 buildroot.go:174] setting up certificates
	I1114 15:53:57.554974  876396 provision.go:83] configureAuth start
	I1114 15:53:57.554989  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.555342  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.558305  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.558711  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558876  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.561568  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.561937  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.561973  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.562106  876396 provision.go:138] copyHostCerts
	I1114 15:53:57.562196  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:57.562218  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:57.562284  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:57.562402  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:57.562413  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:57.562445  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:57.562520  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:57.562532  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:57.562561  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:57.562631  876396 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-842105 san=[192.168.72.151 192.168.72.151 localhost 127.0.0.1 minikube old-k8s-version-842105]
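The "generating server cert" line above creates a machine server certificate whose SANs cover the VM IP, loopback, and the hostname aliases. A simplified sketch using Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its own CA and writes server.pem/server-key.pem under .minikube/machines:

// server_cert_sketch.go: self-signed server certificate with the SANs
// listed in the "generating server cert" line above. Minikube signs with
// its own CA; self-signing here is a simplification for illustration.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-842105"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: IPs plus the hostname aliases.
		IPAddresses: []net.IP{net.ParseIP("192.168.72.151"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-842105"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Println("(key omitted; minikube writes server.pem/server-key.pem under .minikube/machines)")
}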
	I1114 15:53:57.825621  876396 provision.go:172] copyRemoteCerts
	I1114 15:53:57.825706  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:57.825739  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.828352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828732  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.828778  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828924  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.829159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.829356  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.829505  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:57.913614  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:57.935960  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:53:57.957927  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:53:57.980061  876396 provision.go:86] duration metric: configureAuth took 425.071777ms
	I1114 15:53:57.980109  876396 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:57.980308  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:53:57.980405  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.983736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984128  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.984161  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984367  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.984574  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984927  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.985116  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.985478  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.985505  876396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:58.297063  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:58.297107  876396 machine.go:91] provisioned docker machine in 1.004160647s
	I1114 15:53:58.297121  876396 start.go:300] post-start starting for "old-k8s-version-842105" (driver="kvm2")
	I1114 15:53:58.297135  876396 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:58.297159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.297578  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:58.297624  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.300608  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301051  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.301081  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301312  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.301485  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.301655  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.301774  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.387785  876396 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:58.391947  876396 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:58.391974  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:58.392056  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:58.392177  876396 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:58.392301  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:58.401525  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:58.422853  876396 start.go:303] post-start completed in 125.713467ms
	I1114 15:53:58.422892  876396 fix.go:56] fixHost completed within 22.732917848s
	I1114 15:53:58.422922  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.425682  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.426098  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426282  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.426487  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426663  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426830  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.427040  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:58.427400  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:58.427416  876396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:58.537121  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977238.485050071
	
	I1114 15:53:58.537151  876396 fix.go:206] guest clock: 1699977238.485050071
	I1114 15:53:58.537161  876396 fix.go:219] Guest: 2023-11-14 15:53:58.485050071 +0000 UTC Remote: 2023-11-14 15:53:58.422897103 +0000 UTC m=+286.112017318 (delta=62.152968ms)
	I1114 15:53:58.537187  876396 fix.go:190] guest clock delta is within tolerance: 62.152968ms
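The fix step above compares the guest clock against the host's and only resyncs when the delta exceeds a tolerance; the 62ms delta here is within bounds. A minimal sketch of that comparison follows, using the timestamps from the log; the 2s tolerance is an assumed value for illustration, not necessarily the threshold minikube uses.

    // clockdelta_sketch.go: illustrative tolerance check, not minikube's code.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Unix(1699977238, 485050071)
    	remote := time.Date(2023, 11, 14, 15, 53, 58, 422897103, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold for the sketch
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }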
	I1114 15:53:58.537206  876396 start.go:83] releasing machines lock for "old-k8s-version-842105", held for 22.847251095s
	I1114 15:53:58.537248  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.537548  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:58.540515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.540932  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.540974  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.541110  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541612  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541912  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.542012  876396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:58.542077  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.542171  876396 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:58.542202  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.544841  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545190  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545246  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545465  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.545666  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.545694  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545714  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545816  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.545905  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.546006  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.546067  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.546212  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.546365  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.626301  876396 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:58.651834  876396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:58.799770  876396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:58.806042  876396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:58.806134  876396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:58.824707  876396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:58.824752  876396 start.go:472] detecting cgroup driver to use...
	I1114 15:53:58.824824  876396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:58.840144  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:58.854846  876396 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:58.854905  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:58.869926  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:58.883517  876396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:58.990519  876396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:59.108637  876396 docker.go:219] disabling docker service ...
	I1114 15:53:59.108712  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:59.124681  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:59.138748  876396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:59.260422  876396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:59.364365  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:59.376773  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:59.394948  876396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:53:59.395027  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.404000  876396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:59.404074  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.412822  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.421316  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
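The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and reset conmon_cgroup to "pod". A rough Go equivalent of those substitutions is shown below; it targets the same path as the log but is a sketch of the edits, not the code minikube actually runs over SSH.

    // crioconf_sketch.go: mirrors the sed edits from the log; illustrative only.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log

    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)

    	// pause_image = "registry.k8s.io/pause:3.1"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
    	// cgroup_manager = "cgroupfs"
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^[ \t]*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }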
	I1114 15:53:59.429685  876396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:59.438818  876396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:59.446459  876396 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:59.446533  876396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:59.459160  876396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:59.467670  876396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:59.579125  876396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:59.794436  876396 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:59.794525  876396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:59.801013  876396 start.go:540] Will wait 60s for crictl version
	I1114 15:53:59.801095  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:53:59.805735  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:59.851270  876396 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:59.851383  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.898885  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.953911  876396 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1114 15:53:58.562788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Start
	I1114 15:53:58.562971  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring networks are active...
	I1114 15:53:58.563570  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network default is active
	I1114 15:53:58.564001  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network mk-default-k8s-diff-port-529430 is active
	I1114 15:53:58.564406  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Getting domain xml...
	I1114 15:53:58.565186  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Creating domain...
	I1114 15:53:59.907130  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting to get IP...
	I1114 15:53:59.908507  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.908991  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.909128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:53:59.908977  877437 retry.go:31] will retry after 306.122553ms: waiting for machine to come up
	I1114 15:53:56.176595  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.676568  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.699015  876220 api_server.go:72] duration metric: took 2.561885213s to wait for apiserver process to appear ...
	I1114 15:53:56.699041  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:53:56.699058  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:53:59.955466  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:59.959121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959545  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:59.959572  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959896  876396 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:59.965859  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:59.982494  876396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 15:53:59.982563  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:00.029401  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:00.029483  876396 ssh_runner.go:195] Run: which lz4
	I1114 15:54:00.034065  876396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:54:00.039738  876396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:00.039780  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1114 15:54:01.846049  876396 crio.go:444] Took 1.812024 seconds to copy over tarball
	I1114 15:54:01.846160  876396 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:01.387625  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.387668  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.387690  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.430505  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.430539  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.930801  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.937138  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:01.937169  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.431712  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.442719  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:02.442758  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.931021  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.938062  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:54:02.947420  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:02.947453  876220 api_server.go:131] duration metric: took 6.248404315s to wait for apiserver health ...
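The sequence above shows the restart path polling /healthz on a short interval (roughly every 500ms here): first 403 responses, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 after about 6.2s. A compact sketch of that kind of wait loop follows; the URL is the one from the log, and skipping TLS verification here merely stands in for minikube's use of the cluster's client certificates.

    // healthz_poll_sketch.go: illustrative apiserver /healthz wait loop.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The sketch skips verification; minikube authenticates with cluster certs instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.147:8443/healthz" // endpoint from the log
    	deadline := time.Now().Add(4 * time.Minute)

    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body)
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }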
	I1114 15:54:02.947465  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:54:02.947479  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:02.949231  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:00.216693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217419  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217476  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.217346  877437 retry.go:31] will retry after 276.469735ms: waiting for machine to come up
	I1114 15:54:00.496200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496596  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496632  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.496550  877437 retry.go:31] will retry after 390.20616ms: waiting for machine to come up
	I1114 15:54:00.888367  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889341  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.889235  877437 retry.go:31] will retry after 551.896336ms: waiting for machine to come up
	I1114 15:54:01.443159  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443794  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443825  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:01.443756  877437 retry.go:31] will retry after 655.228992ms: waiting for machine to come up
	I1114 15:54:02.100194  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100681  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100716  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.100609  877437 retry.go:31] will retry after 896.817469ms: waiting for machine to come up
	I1114 15:54:02.999296  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999947  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999979  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.999897  877437 retry.go:31] will retry after 1.177419274s: waiting for machine to come up
	I1114 15:54:04.178783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179425  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179452  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:04.179351  877437 retry.go:31] will retry after 1.259348434s: waiting for machine to come up
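The "will retry after ..." lines come from a retry helper (retry.go:31) waiting for the VM to obtain a DHCP lease, with delays that grow roughly exponentially and include jitter. A generic sketch of that pattern is below; the starting delay, growth factor, and the stand-in check function are assumptions for illustration, not minikube's actual constants.

    // retry_sketch.go: generic exponential backoff with jitter, illustrative only.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or maxWait elapses.
    func retryWithBackoff(fn func() error, maxWait time.Duration) error {
    	deadline := time.Now().Add(maxWait)
    	delay := 300 * time.Millisecond // assumed starting delay
    	for attempt := 1; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
    		}
    		// Jittered, roughly exponential backoff, similar in shape to the log's delays.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    }

    func main() {
    	start := time.Now()
    	err := retryWithBackoff(func() error {
    		// Stand-in for "unable to find current IP address of domain ...".
    		if time.Since(start) < 3*time.Second {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	}, 30*time.Second)
    	fmt.Println("result:", err)
    }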
	I1114 15:54:02.950643  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:02.986775  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
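cni.go recommended the bridge CNI for the kvm2 + crio combination, and the 457-byte /etc/cni/net.d/1-k8s.conflist written here carries that configuration. The log does not show the file's contents, so the snippet below writes a generic bridge + host-local conflist of the same general shape, purely as an illustration of what such a file looks like; none of its field values are taken from minikube.

    // cni_conflist_sketch.go: writes a generic bridge CNI config; not the exact
    // file minikube generates (its contents aren't shown in the log). Needs root.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }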
	I1114 15:54:03.054339  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:03.074346  876220 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:03.074405  876220 system_pods.go:61] "coredns-5dd5756b68-gqxld" [0b846e58-0bbc-4770-94a4-8324753b36c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:03.074428  876220 system_pods.go:61] "etcd-embed-certs-279880" [e085e7a7-ec2e-4cf6-bbb2-d242a5e8d075] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:03.074442  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [4ffbfbaf-9978-4bb1-9e4e-ef23365f78fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:03.074455  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [d895906c-899f-41b3-9484-1a6985b978f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:03.074471  876220 system_pods.go:61] "kube-proxy-j2qnm" [feee8604-a749-4908-8361-42f63d55ec64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:03.074485  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [4325a0ba-9013-4899-b01b-befcb4cd5b72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:03.074504  876220 system_pods.go:61] "metrics-server-57f55c9bc5-gvtbw" [a7c44219-4b00-49c0-817f-68f9499f1ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:03.074531  876220 system_pods.go:61] "storage-provisioner" [f464123e-8329-4785-87ae-78ff30ac7d27] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:03.074547  876220 system_pods.go:74] duration metric: took 20.179327ms to wait for pod list to return data ...
	I1114 15:54:03.074558  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:03.078482  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:03.078526  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:03.078542  876220 node_conditions.go:105] duration metric: took 3.972732ms to run NodePressure ...
	I1114 15:54:03.078565  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:03.514232  876220 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521097  876220 kubeadm.go:787] kubelet initialised
	I1114 15:54:03.521125  876220 kubeadm.go:788] duration metric: took 6.859971ms waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521168  876220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:03.528777  876220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:05.249338  876396 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403140591s)
	I1114 15:54:05.249383  876396 crio.go:451] Took 3.403300 seconds to extract the tarball
	I1114 15:54:05.249397  876396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:05.298779  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:05.351838  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:05.351873  876396 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:05.352034  876396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.352124  876396 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.352201  876396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.352219  876396 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.352067  876396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.352087  876396 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354089  876396 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1114 15:54:05.354101  876396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.354115  876396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.354117  876396 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354097  876396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.354178  876396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.354197  876396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.354270  876396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.512829  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.521658  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.529228  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1114 15:54:05.529451  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.529597  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.529802  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.534672  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.613591  876396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1114 15:54:05.613650  876396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.613721  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.644613  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.668090  876396 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1114 15:54:05.668167  876396 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.668231  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.685343  876396 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1114 15:54:05.685398  876396 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1114 15:54:05.685458  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725459  876396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1114 15:54:05.725508  876396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.725523  876396 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1114 15:54:05.725561  876396 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.725565  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725602  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727180  876396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1114 15:54:05.727215  876396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.727249  876396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1114 15:54:05.727283  876396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.727254  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727322  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.727325  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.849608  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.849657  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1114 15:54:05.849694  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.849747  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.849753  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.849830  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1114 15:54:05.849847  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.990379  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1114 15:54:05.990536  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1114 15:54:06.006943  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1114 15:54:06.006966  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1114 15:54:06.007017  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1114 15:54:06.007076  876396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.007134  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1114 15:54:06.013121  876396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1114 15:54:06.013141  876396 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.013192  876396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1114 15:54:05.440685  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441307  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441342  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:05.441243  877437 retry.go:31] will retry after 1.84307404s: waiting for machine to come up
	I1114 15:54:07.286027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286581  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286612  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:07.286501  877437 retry.go:31] will retry after 2.149522769s: waiting for machine to come up
	I1114 15:54:09.437500  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.437998  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.438027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:09.437930  877437 retry.go:31] will retry after 1.825733531s: waiting for machine to come up
	I1114 15:54:06.558998  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.056443  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.550292  876220 pod_ready.go:92] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:09.550325  876220 pod_ready.go:81] duration metric: took 6.02152032s waiting for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:09.550338  876220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:07.587512  876396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.574275406s)
	I1114 15:54:07.587549  876396 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1114 15:54:07.587609  876396 cache_images.go:92] LoadImages completed in 2.235719587s
	W1114 15:54:07.587745  876396 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1114 15:54:07.587935  876396 ssh_runner.go:195] Run: crio config
	I1114 15:54:07.677561  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:07.677590  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:07.677624  876396 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:07.677649  876396 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-842105 NodeName:old-k8s-version-842105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 15:54:07.677852  876396 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-842105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-842105
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.151:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:07.677991  876396 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-842105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
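	The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the steps below. As a rough sketch, assuming a systemd host, the effective unit (base file plus drop-ins) can be inspected and reloaded like this:

	  # show the kubelet unit together with any drop-ins that override ExecStart
	  systemctl cat kubelet
	  # pick up drop-in changes and restart the service
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet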
	I1114 15:54:07.678072  876396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1114 15:54:07.690041  876396 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:07.690195  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:07.699428  876396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1114 15:54:07.717871  876396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:07.736451  876396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1114 15:54:07.760405  876396 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:07.766002  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
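	The one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: any previous mapping is stripped, the current one is appended, and the result is copied back over the file. A minimal standalone sketch of the same technique (the name and IP are simply the values from this run):

	  IP=192.168.72.151
	  NAME=control-plane.minikube.internal
	  # drop any old entry for $NAME, then append the current mapping
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$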
	I1114 15:54:07.782987  876396 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105 for IP: 192.168.72.151
	I1114 15:54:07.783024  876396 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:07.783232  876396 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:07.783328  876396 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:07.783435  876396 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/client.key
	I1114 15:54:07.783530  876396 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key.8e16fdf2
	I1114 15:54:07.783587  876396 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key
	I1114 15:54:07.783733  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:07.783774  876396 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:07.783788  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:07.783825  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:07.783860  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:07.783903  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:07.783976  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:07.784951  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:07.817959  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:07.849497  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:07.882885  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:54:07.917706  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:07.951168  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:07.980449  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:08.004910  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:08.038634  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:08.068999  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:08.099934  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:08.131714  876396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:08.150662  876396 ssh_runner.go:195] Run: openssl version
	I1114 15:54:08.158258  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:08.168218  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173533  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173650  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.179886  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:08.189654  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:08.199563  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204439  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204512  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.210587  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:08.220509  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:08.233859  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240418  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240484  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.248025  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:08.261693  876396 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:08.267518  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:08.275553  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:08.283812  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:08.292063  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:08.299976  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:08.307726  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
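	The openssl calls above combine two standard checks: -hash -noout yields the subject hash used to name the lookup symlink in /etc/ssl/certs (e.g. b5213941.0), and -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. A minimal sketch of both, using a path taken from this run:

	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")      # subject hash, e.g. b5213941
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # hashed symlink for CA lookup
	  openssl x509 -noout -in "$CERT" -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"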
	I1114 15:54:08.315248  876396 kubeadm.go:404] StartCluster: {Name:old-k8s-version-842105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:08.315441  876396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:08.315509  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:08.373222  876396 cri.go:89] found id: ""
	I1114 15:54:08.373309  876396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:08.386081  876396 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:08.386113  876396 kubeadm.go:636] restartCluster start
	I1114 15:54:08.386175  876396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:08.398113  876396 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.399779  876396 kubeconfig.go:92] found "old-k8s-version-842105" server: "https://192.168.72.151:8443"
	I1114 15:54:08.403355  876396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:08.415044  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.415107  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.431221  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.431246  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.431301  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.441629  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.941906  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.942002  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.953895  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.442080  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.442167  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.454396  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.941960  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.942060  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.957741  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.442467  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.442585  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.459029  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.942110  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.942218  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.958207  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.441724  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.441846  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.456551  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.942092  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.942207  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.954734  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.265162  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265717  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:11.265645  877437 retry.go:31] will retry after 3.454522942s: waiting for machine to come up
	I1114 15:54:14.722448  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722869  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722900  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:14.722811  877437 retry.go:31] will retry after 4.385736497s: waiting for machine to come up
	I1114 15:54:11.568989  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:11.569021  876220 pod_ready.go:81] duration metric: took 2.018672405s waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:11.569032  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:13.599380  876220 pod_ready.go:102] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:15.095781  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.095806  876220 pod_ready.go:81] duration metric: took 3.52676767s waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.095816  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101837  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.101860  876220 pod_ready.go:81] duration metric: took 6.035008ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101871  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107099  876220 pod_ready.go:92] pod "kube-proxy-j2qnm" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.107119  876220 pod_ready.go:81] duration metric: took 5.239707ms waiting for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107131  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146726  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.146753  876220 pod_ready.go:81] duration metric: took 39.614218ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146765  876220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:12.442685  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.442780  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.456555  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:12.941805  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.941902  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.955572  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.442111  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.442220  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.455769  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.941932  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.942051  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.957167  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.442727  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.442855  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.455220  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.941815  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.941911  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.955030  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.441942  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.442064  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.454228  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.942207  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.942299  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.955845  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.442537  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.442642  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.454339  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.941837  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.941933  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.955292  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:19.110067  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.110621  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Found IP for machine: 192.168.61.196
	I1114 15:54:19.110650  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserving static IP address...
	I1114 15:54:19.110682  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has current primary IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.111082  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.111142  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | skip adding static IP to network mk-default-k8s-diff-port-529430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"}
	I1114 15:54:19.111163  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserved static IP address: 192.168.61.196
	I1114 15:54:19.111178  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for SSH to be available...
	I1114 15:54:19.111191  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Getting to WaitForSSH function...
	I1114 15:54:19.113739  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114145  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.114196  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114327  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH client type: external
	I1114 15:54:19.114358  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa (-rw-------)
	I1114 15:54:19.114395  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:19.114417  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | About to run SSH command:
	I1114 15:54:19.114432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | exit 0
	I1114 15:54:19.213651  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:19.214087  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetConfigRaw
	I1114 15:54:19.214767  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.217678  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218072  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.218099  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218414  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:54:19.218634  876668 machine.go:88] provisioning docker machine ...
	I1114 15:54:19.218662  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:19.218923  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219132  876668 buildroot.go:166] provisioning hostname "default-k8s-diff-port-529430"
	I1114 15:54:19.219155  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219292  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.221719  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222106  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.222129  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222272  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.222435  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222606  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222748  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.222907  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.223312  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.223328  876668 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-529430 && echo "default-k8s-diff-port-529430" | sudo tee /etc/hostname
	I1114 15:54:19.373658  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-529430
	
	I1114 15:54:19.373691  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.376972  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377388  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.377432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377549  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.377754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.377934  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.378123  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.378325  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.378667  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.378685  876668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-529430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-529430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-529430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:19.523410  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:19.523453  876668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:19.523498  876668 buildroot.go:174] setting up certificates
	I1114 15:54:19.523511  876668 provision.go:83] configureAuth start
	I1114 15:54:19.523530  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.523872  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.526757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527213  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.527242  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.530193  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530590  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.530630  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530794  876668 provision.go:138] copyHostCerts
	I1114 15:54:19.530862  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:19.530886  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:19.530965  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:19.531069  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:19.531078  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:19.531104  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:19.531179  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:19.531188  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:19.531218  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:19.531285  876668 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-529430 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube default-k8s-diff-port-529430]
	I1114 15:54:19.845785  876668 provision.go:172] copyRemoteCerts
	I1114 15:54:19.845852  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:19.845880  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.849070  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849461  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.849492  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.849916  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.850139  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.850326  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:19.946041  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:19.976301  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1114 15:54:20.667697  876065 start.go:369] acquired machines lock for "no-preload-490998" in 59.048435079s
	I1114 15:54:20.667765  876065 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:54:20.667776  876065 fix.go:54] fixHost starting: 
	I1114 15:54:20.668233  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:20.668278  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:20.689041  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I1114 15:54:20.689574  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:20.690138  876065 main.go:141] libmachine: Using API Version  1
	I1114 15:54:20.690168  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:20.690554  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:20.690760  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:20.690909  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:54:20.692627  876065 fix.go:102] recreateIfNeeded on no-preload-490998: state=Stopped err=<nil>
	I1114 15:54:20.692652  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	W1114 15:54:20.692849  876065 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:54:20.694674  876065 out.go:177] * Restarting existing kvm2 VM for "no-preload-490998" ...
	I1114 15:54:17.454958  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:19.455250  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:20.001972  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:54:20.026531  876668 provision.go:86] duration metric: configureAuth took 502.998106ms
	I1114 15:54:20.026585  876668 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:20.026832  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:20.026965  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.030385  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.030791  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030974  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.031200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031423  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.031861  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.032341  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.032367  876668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:20.394771  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:20.394805  876668 machine.go:91] provisioned docker machine in 1.176155811s
	I1114 15:54:20.394818  876668 start.go:300] post-start starting for "default-k8s-diff-port-529430" (driver="kvm2")
	I1114 15:54:20.394832  876668 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:20.394853  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.395240  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:20.395288  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.398478  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.398906  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.398945  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.399107  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.399344  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.399584  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.399752  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.491251  876668 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:20.495507  876668 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:20.495538  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:20.495627  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:20.495718  876668 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:20.495814  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:20.504112  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:20.527100  876668 start.go:303] post-start completed in 132.264495ms
	I1114 15:54:20.527124  876668 fix.go:56] fixHost completed within 21.989733182s
	I1114 15:54:20.527150  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.530055  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530460  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.530502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530660  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.530868  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531069  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531281  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.531458  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.531874  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.531889  876668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:54:20.667502  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977260.612374456
	
	I1114 15:54:20.667529  876668 fix.go:206] guest clock: 1699977260.612374456
	I1114 15:54:20.667536  876668 fix.go:219] Guest: 2023-11-14 15:54:20.612374456 +0000 UTC Remote: 2023-11-14 15:54:20.527127621 +0000 UTC m=+270.585277055 (delta=85.246835ms)
	I1114 15:54:20.667591  876668 fix.go:190] guest clock delta is within tolerance: 85.246835ms
	I1114 15:54:20.667604  876668 start.go:83] releasing machines lock for "default-k8s-diff-port-529430", held for 22.130251397s
	I1114 15:54:20.667642  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.668017  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:20.671690  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672166  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.672199  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672583  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673190  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673412  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673507  876668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:20.673573  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.673677  876668 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:20.673702  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.677394  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677505  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677813  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.677847  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678133  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.678165  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678228  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678331  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678456  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678543  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678799  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.679008  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.770378  876668 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:20.799026  876668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:20.952410  876668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:20.960020  876668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:20.960164  876668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:20.976497  876668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:20.976537  876668 start.go:472] detecting cgroup driver to use...
	I1114 15:54:20.976623  876668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:20.995510  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:21.008750  876668 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:21.008824  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:21.021811  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:21.035329  876668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:21.148775  876668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:21.285242  876668 docker.go:219] disabling docker service ...
	I1114 15:54:21.285318  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:21.298782  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:21.316123  876668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:21.488090  876668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:21.618889  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:21.632974  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:21.655781  876668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:21.655882  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.669231  876668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:21.669316  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.678786  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.688193  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.698797  876668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:21.709360  876668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:21.718312  876668 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:21.718380  876668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:21.736502  876668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:21.746439  876668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:21.863214  876668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:22.102179  876668 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:22.102265  876668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:22.108046  876668 start.go:540] Will wait 60s for crictl version
	I1114 15:54:22.108121  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:54:22.113795  876668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:22.165127  876668 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:22.165229  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.225931  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.294400  876668 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:17.442023  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.442115  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.454984  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:17.942288  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.942367  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.954587  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:18.415437  876396 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:18.415476  876396 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:18.415510  876396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:18.415594  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:18.457148  876396 cri.go:89] found id: ""
	I1114 15:54:18.457220  876396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:18.473763  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:18.482554  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:18.482618  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491282  876396 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491331  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:18.611750  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.639893  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.02808682s)
	I1114 15:54:19.639964  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.850775  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.939183  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:20.055296  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:20.055384  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.076978  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.591616  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.091982  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.591312  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.635294  876396 api_server.go:72] duration metric: took 1.579988958s to wait for apiserver process to appear ...
	I1114 15:54:21.635323  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:21.635345  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:20.696162  876065 main.go:141] libmachine: (no-preload-490998) Calling .Start
	I1114 15:54:20.696380  876065 main.go:141] libmachine: (no-preload-490998) Ensuring networks are active...
	I1114 15:54:20.697208  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network default is active
	I1114 15:54:20.697665  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network mk-no-preload-490998 is active
	I1114 15:54:20.698105  876065 main.go:141] libmachine: (no-preload-490998) Getting domain xml...
	I1114 15:54:20.698815  876065 main.go:141] libmachine: (no-preload-490998) Creating domain...
	I1114 15:54:22.152078  876065 main.go:141] libmachine: (no-preload-490998) Waiting to get IP...
	I1114 15:54:22.153475  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.153983  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.154071  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.153960  877583 retry.go:31] will retry after 305.242943ms: waiting for machine to come up
	I1114 15:54:22.460636  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.461432  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.461609  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.461568  877583 retry.go:31] will retry after 354.226558ms: waiting for machine to come up
	I1114 15:54:22.817225  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.817884  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.817999  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.817955  877583 retry.go:31] will retry after 337.727596ms: waiting for machine to come up
	I1114 15:54:23.157897  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.158614  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.158724  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.158679  877583 retry.go:31] will retry after 375.356441ms: waiting for machine to come up
	I1114 15:54:23.536061  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.536607  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.536633  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.536565  877583 retry.go:31] will retry after 652.853452ms: waiting for machine to come up
	I1114 15:54:22.295757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:22.299345  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.299749  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:22.299788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.300017  876668 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:22.305363  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:22.318715  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:22.318773  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:22.368522  876668 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:22.368595  876668 ssh_runner.go:195] Run: which lz4
	I1114 15:54:22.373798  876668 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:54:22.379337  876668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:22.379368  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:54:24.194028  876668 crio.go:444] Took 1.820276 seconds to copy over tarball
	I1114 15:54:24.194111  876668 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:21.457059  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:23.458432  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:26.636325  876396 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:54:26.636396  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:24.191080  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:24.191648  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:24.191685  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:24.191565  877583 retry.go:31] will retry after 883.93292ms: waiting for machine to come up
	I1114 15:54:25.076820  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:25.077325  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:25.077370  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:25.077290  877583 retry.go:31] will retry after 1.071889504s: waiting for machine to come up
	I1114 15:54:26.151239  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:26.151777  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:26.151812  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:26.151734  877583 retry.go:31] will retry after 1.05055701s: waiting for machine to come up
	I1114 15:54:27.204714  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:27.205193  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:27.205216  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:27.205147  877583 retry.go:31] will retry after 1.366779273s: waiting for machine to come up
	I1114 15:54:28.573131  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:28.573578  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:28.573605  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:28.573548  877583 retry.go:31] will retry after 1.629033633s: waiting for machine to come up
	I1114 15:54:27.635092  876668 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.440943465s)
	I1114 15:54:27.635134  876668 crio.go:451] Took 3.441078 seconds to extract the tarball
	I1114 15:54:27.635148  876668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:27.685486  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:27.742411  876668 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:54:27.742499  876668 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:54:27.742596  876668 ssh_runner.go:195] Run: crio config
	I1114 15:54:27.815555  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:27.815579  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:27.815601  876668 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:27.815624  876668 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-529430 NodeName:default-k8s-diff-port-529430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:54:27.815789  876668 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-529430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:27.815921  876668 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-529430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1114 15:54:27.815999  876668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:54:27.825716  876668 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:27.825799  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:27.838987  876668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1114 15:54:27.855187  876668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:27.872995  876668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1114 15:54:27.890455  876668 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:27.895678  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:27.909953  876668 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430 for IP: 192.168.61.196
	I1114 15:54:27.909999  876668 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:27.910204  876668 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:27.910271  876668 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:27.910463  876668 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/client.key
	I1114 15:54:27.910558  876668 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key.0d67e2f2
	I1114 15:54:27.910616  876668 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key
	I1114 15:54:27.910753  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:27.910797  876668 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:27.910811  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:27.910872  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:27.910917  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:27.910950  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:27.911007  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:27.911985  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:27.937341  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:27.963511  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:27.990011  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:54:28.016668  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:28.048528  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:28.077392  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:28.107784  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:28.136600  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:28.163995  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:28.191715  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:28.223205  876668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:28.243672  876668 ssh_runner.go:195] Run: openssl version
	I1114 15:54:28.249895  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:28.260568  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266792  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266887  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.273048  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:28.283458  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:28.294810  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300316  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300384  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.306193  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:28.319260  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:28.332843  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339044  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339120  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.346094  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:28.359711  876668 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:28.365300  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:28.372965  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:28.380378  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:28.387801  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:28.395228  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:28.401252  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:54:28.407435  876668 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:28.407581  876668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:28.407663  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:28.462877  876668 cri.go:89] found id: ""
	I1114 15:54:28.462962  876668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:28.473800  876668 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:28.473828  876668 kubeadm.go:636] restartCluster start
	I1114 15:54:28.473885  876668 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:28.485255  876668 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.486649  876668 kubeconfig.go:92] found "default-k8s-diff-port-529430" server: "https://192.168.61.196:8444"
	I1114 15:54:28.489408  876668 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:28.499927  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.499990  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.512175  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.512193  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.512238  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.524128  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.025143  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.025234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.040757  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.525035  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.525153  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.538214  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.174172  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:28.174207  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:28.674934  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.145414  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.145459  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.174596  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.231115  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.231157  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.674653  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.813013  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.813052  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.174424  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.183371  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:30.183427  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.675007  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.686069  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:54:30.697376  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:54:30.697472  876396 api_server.go:131] duration metric: took 9.062139934s to wait for apiserver health ...
	I1114 15:54:30.697503  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:30.697535  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:30.699476  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:25.957052  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:28.490572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:30.701025  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:30.729153  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:30.770856  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:30.785989  876396 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:30.786041  876396 system_pods.go:61] "coredns-5644d7b6d9-dxtd8" [4d22eb1f-551c-49a1-a519-7420c3774e46] Running
	I1114 15:54:30.786051  876396 system_pods.go:61] "etcd-old-k8s-version-842105" [d4d5d869-b609-4017-8cf1-071b11f69d18] Running
	I1114 15:54:30.786057  876396 system_pods.go:61] "kube-apiserver-old-k8s-version-842105" [43e84141-4938-4808-bba5-14080a0a7b9e] Running
	I1114 15:54:30.786063  876396 system_pods.go:61] "kube-controller-manager-old-k8s-version-842105" [8fca7797-f3a1-4223-a921-0819aca95ce7] Running
	I1114 15:54:30.786069  876396 system_pods.go:61] "kube-proxy-kw2ns" [c6b5fbe3-a473-4120-bc41-fb85f6d3841d] Running
	I1114 15:54:30.786074  876396 system_pods.go:61] "kube-scheduler-old-k8s-version-842105" [c9cad8bb-b7a9-44fd-92d3-d3360284c9f3] Running
	I1114 15:54:30.786082  876396 system_pods.go:61] "metrics-server-74d5856cc6-q9hc5" [1333b6de-5f3f-4937-8e73-d2b7f2c6d37e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:30.786091  876396 system_pods.go:61] "storage-provisioner" [2d95ef7e-626e-4840-9f5d-708cd8c66576] Running
	I1114 15:54:30.786107  876396 system_pods.go:74] duration metric: took 15.207693ms to wait for pod list to return data ...
	I1114 15:54:30.786125  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:30.799034  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:30.799089  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:30.799105  876396 node_conditions.go:105] duration metric: took 12.974469ms to run NodePressure ...
	I1114 15:54:30.799137  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:31.065040  876396 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:31.068697  876396 retry.go:31] will retry after 147.435912ms: kubelet not initialised
	I1114 15:54:31.225671  876396 retry.go:31] will retry after 334.031544ms: kubelet not initialised
	I1114 15:54:31.565487  876396 retry.go:31] will retry after 641.328262ms: kubelet not initialised
	I1114 15:54:32.215327  876396 retry.go:31] will retry after 1.211422414s: kubelet not initialised
	I1114 15:54:30.204276  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:30.204775  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:30.204811  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:30.204713  877583 retry.go:31] will retry after 1.909641151s: waiting for machine to come up
	I1114 15:54:32.115658  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:32.116175  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:32.116209  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:32.116116  877583 retry.go:31] will retry after 3.266336566s: waiting for machine to come up
	I1114 15:54:30.024900  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.025024  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.041104  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.524842  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.524920  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.540643  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.025166  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.040723  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.525252  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.525364  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.537978  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.024495  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.024626  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.037625  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.524934  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.525053  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.540579  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.025237  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.025366  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.037675  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.524206  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.524300  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.537100  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.025150  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.039435  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.525030  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.525140  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.541014  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.957869  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.458285  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:35.458815  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.432677  876396 retry.go:31] will retry after 864.36813ms: kubelet not initialised
	I1114 15:54:34.302450  876396 retry.go:31] will retry after 2.833071739s: kubelet not initialised
	I1114 15:54:37.142128  876396 retry.go:31] will retry after 2.880672349s: kubelet not initialised
	I1114 15:54:35.386010  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:35.386483  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:35.386526  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:35.386417  877583 retry.go:31] will retry after 3.791360608s: waiting for machine to come up
	I1114 15:54:35.024814  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.024924  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.038035  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:35.524433  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.524540  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.538065  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.024585  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.024690  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.036540  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.525201  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.525293  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.537751  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.024292  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.024388  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.037480  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.525115  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.525234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.538365  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.025002  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:38.025148  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:38.036994  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.500770  876668 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:38.500813  876668 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:38.500860  876668 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:38.500951  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:38.538468  876668 cri.go:89] found id: ""
	I1114 15:54:38.538571  876668 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:38.554809  876668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:38.563961  876668 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:38.564025  876668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572905  876668 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572930  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:38.694403  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.614869  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.815977  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.914051  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:37.956992  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.957705  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.179165  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.179746  876065 main.go:141] libmachine: (no-preload-490998) Found IP for machine: 192.168.50.251
	I1114 15:54:39.179773  876065 main.go:141] libmachine: (no-preload-490998) Reserving static IP address...
	I1114 15:54:39.179792  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has current primary IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.180259  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.180295  876065 main.go:141] libmachine: (no-preload-490998) Reserved static IP address: 192.168.50.251
	I1114 15:54:39.180328  876065 main.go:141] libmachine: (no-preload-490998) DBG | skip adding static IP to network mk-no-preload-490998 - found existing host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"}
	I1114 15:54:39.180349  876065 main.go:141] libmachine: (no-preload-490998) DBG | Getting to WaitForSSH function...
	I1114 15:54:39.180368  876065 main.go:141] libmachine: (no-preload-490998) Waiting for SSH to be available...
	I1114 15:54:39.182637  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183005  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.183037  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183157  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH client type: external
	I1114 15:54:39.183185  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa (-rw-------)
	I1114 15:54:39.183218  876065 main.go:141] libmachine: (no-preload-490998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:39.183239  876065 main.go:141] libmachine: (no-preload-490998) DBG | About to run SSH command:
	I1114 15:54:39.183251  876065 main.go:141] libmachine: (no-preload-490998) DBG | exit 0
	I1114 15:54:39.276793  876065 main.go:141] libmachine: (no-preload-490998) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:39.277095  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetConfigRaw
	I1114 15:54:39.277799  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.281002  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281360  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.281393  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281696  876065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/config.json ...
	I1114 15:54:39.281970  876065 machine.go:88] provisioning docker machine ...
	I1114 15:54:39.281997  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:39.282236  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282395  876065 buildroot.go:166] provisioning hostname "no-preload-490998"
	I1114 15:54:39.282416  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.285099  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285498  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.285527  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285695  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.285865  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286026  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286277  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.286523  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.286978  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.287007  876065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-490998 && echo "no-preload-490998" | sudo tee /etc/hostname
	I1114 15:54:39.419452  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-490998
	
	I1114 15:54:39.419493  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.422544  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.422912  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.422951  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.423134  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.423360  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423591  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423756  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.423915  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.424324  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.424363  876065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-490998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-490998/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-490998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:39.552044  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:39.552085  876065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:39.552106  876065 buildroot.go:174] setting up certificates
	I1114 15:54:39.552118  876065 provision.go:83] configureAuth start
	I1114 15:54:39.552127  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.552438  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.555275  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555660  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.555771  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555936  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.558628  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559004  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.559042  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559181  876065 provision.go:138] copyHostCerts
	I1114 15:54:39.559247  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:39.559273  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:39.559337  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:39.559498  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:39.559512  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:39.559547  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:39.559612  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:39.559620  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:39.559644  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:39.559697  876065 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.no-preload-490998 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube no-preload-490998]
	I1114 15:54:39.728218  876065 provision.go:172] copyRemoteCerts
	I1114 15:54:39.728286  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:39.728314  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.731482  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.731920  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.731966  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.732138  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.732376  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.732605  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.732802  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:39.819537  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:39.848716  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 15:54:39.876339  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:54:39.917428  876065 provision.go:86] duration metric: configureAuth took 365.293803ms
	I1114 15:54:39.917461  876065 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:39.917686  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:39.917783  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.920823  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921417  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.921457  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921785  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.921989  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922170  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922351  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.922516  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.922992  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.923017  876065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:40.270821  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:40.270851  876065 machine.go:91] provisioned docker machine in 988.864728ms
	I1114 15:54:40.270865  876065 start.go:300] post-start starting for "no-preload-490998" (driver="kvm2")
	I1114 15:54:40.270878  876065 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:40.270910  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.271296  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:40.271331  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.274197  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274517  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.274547  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274784  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.275045  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.275209  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.275379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.363810  876065 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:40.368485  876065 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:40.368515  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:40.368599  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:40.368688  876065 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:40.368820  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:40.378691  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:40.401789  876065 start.go:303] post-start completed in 130.90895ms
	I1114 15:54:40.401816  876065 fix.go:56] fixHost completed within 19.734039545s
	I1114 15:54:40.401848  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.404413  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404791  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.404824  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404962  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.405212  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405442  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405614  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.405840  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:40.406318  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:40.406338  876065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:54:40.521875  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977280.490539427
	
	I1114 15:54:40.521907  876065 fix.go:206] guest clock: 1699977280.490539427
	I1114 15:54:40.521917  876065 fix.go:219] Guest: 2023-11-14 15:54:40.490539427 +0000 UTC Remote: 2023-11-14 15:54:40.401821935 +0000 UTC m=+361.372113130 (delta=88.717492ms)
	I1114 15:54:40.521945  876065 fix.go:190] guest clock delta is within tolerance: 88.717492ms
	I1114 15:54:40.521952  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 19.854220019s
	I1114 15:54:40.521990  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.522294  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:40.525204  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525567  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.525611  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525786  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526412  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526589  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526682  876065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:40.526727  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.526847  876065 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:40.526881  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.529470  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529673  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529863  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.529895  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530047  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530189  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.530224  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530226  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530415  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530480  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530594  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530677  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.530726  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530881  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.634647  876065 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:40.641680  876065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:40.784919  876065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:40.791364  876065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:40.791466  876065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:40.814464  876065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:40.814496  876065 start.go:472] detecting cgroup driver to use...
	I1114 15:54:40.814608  876065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:40.834599  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:40.851666  876065 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:40.851761  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:40.870359  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:40.885345  876065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:41.042220  876065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:41.174015  876065 docker.go:219] disabling docker service ...
	I1114 15:54:41.174101  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:41.188849  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:41.201322  876065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:41.329124  876065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:41.456116  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:41.477162  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:41.497860  876065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:41.497932  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.509750  876065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:41.509843  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.521944  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.532916  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.545469  876065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:41.556976  876065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:41.567322  876065 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:41.567401  876065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:41.583043  876065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:41.593941  876065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:41.717384  876065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:41.907278  876065 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:41.907351  876065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:41.912763  876065 start.go:540] Will wait 60s for crictl version
	I1114 15:54:41.912843  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:41.917105  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:41.965326  876065 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:41.965418  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.016065  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.079721  876065 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:40.028538  876396 retry.go:31] will retry after 2.943912692s: kubelet not initialised
	I1114 15:54:42.081301  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:42.084358  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.084771  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:42.084805  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.085014  876065 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:42.089551  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:42.102676  876065 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:42.102730  876065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:42.145434  876065 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:42.145479  876065 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:42.145570  876065 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.145592  876065 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.145621  876065 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.145620  876065 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.145662  876065 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1114 15:54:42.145692  876065 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.145819  876065 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.145564  876065 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.147966  876065 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.147967  876065 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.148056  876065 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1114 15:54:42.147970  876065 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.148093  876065 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.147960  876065 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.318368  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1114 15:54:42.318578  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.325647  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.340363  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.375378  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.473131  876065 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1114 15:54:42.473195  876065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.473202  876065 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1114 15:54:42.473235  876065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.473253  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.473283  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.511600  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.554432  876065 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1114 15:54:42.554502  876065 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1114 15:54:42.554572  876065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.554599  876065 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1114 15:54:42.554618  876065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.554632  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554657  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554532  876065 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.554724  876065 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1114 15:54:42.554750  876065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.554776  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554778  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554907  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.554969  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.576922  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.577004  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.577114  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.577535  876065 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1114 15:54:42.577591  876065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.577631  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.655186  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.655318  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1114 15:54:42.655449  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1114 15:54:42.655473  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:42.655536  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.706186  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1114 15:54:42.706257  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.706283  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1114 15:54:42.706304  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:42.706372  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:42.706408  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1114 15:54:42.706548  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:42.737003  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1114 15:54:42.737032  876065 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737093  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737102  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1114 15:54:42.737179  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1114 15:54:42.737237  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:42.769211  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1114 15:54:42.769251  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1114 15:54:42.769304  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1114 15:54:42.769289  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 15:54:42.769428  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:54:44.006164  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.268897316s)
	I1114 15:54:44.006206  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1114 15:54:44.006240  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.236783751s)
	I1114 15:54:44.006275  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1114 15:54:44.006283  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.269163879s)
	I1114 15:54:44.006297  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1114 15:54:44.006322  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:44.006375  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:40.016931  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:40.017030  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.030798  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.541996  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.042023  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.542537  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.042880  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.542514  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.577021  876668 api_server.go:72] duration metric: took 2.560093027s to wait for apiserver process to appear ...
	I1114 15:54:42.577059  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:42.577088  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.577767  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:42.577805  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.578225  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:43.078953  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.457425  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:44.460290  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:42.978588  876396 retry.go:31] will retry after 5.776997827s: kubelet not initialised
	I1114 15:54:46.326192  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.326231  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.326249  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.390609  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.390668  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.579140  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.590569  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:46.590606  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.079186  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.084460  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.084483  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.578774  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.588878  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.588919  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:48.079047  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:48.084809  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:54:48.098877  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:48.098941  876668 api_server.go:131] duration metric: took 5.521873886s to wait for apiserver health ...
	I1114 15:54:48.098955  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:48.098972  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:48.101010  876668 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:47.219243  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.212835904s)
	I1114 15:54:47.219281  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1114 15:54:47.219308  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:47.219472  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:48.102440  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:48.154163  876668 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:48.212336  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:48.229819  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:48.229862  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:48.229874  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:48.229886  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:48.229896  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:48.229905  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:48.229913  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:48.229923  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:48.229934  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:48.229944  876668 system_pods.go:74] duration metric: took 17.577706ms to wait for pod list to return data ...
	I1114 15:54:48.229961  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:48.236002  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:48.236043  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:48.236057  876668 node_conditions.go:105] duration metric: took 6.089691ms to run NodePressure ...
	I1114 15:54:48.236093  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:48.608191  876668 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622192  876668 kubeadm.go:787] kubelet initialised
	I1114 15:54:48.622221  876668 kubeadm.go:788] duration metric: took 13.999979ms waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622232  876668 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:48.629670  876668 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.636566  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636594  876668 pod_ready.go:81] duration metric: took 6.892422ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.636611  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636619  876668 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.643982  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644013  876668 pod_ready.go:81] duration metric: took 7.383826ms waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.644030  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644037  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.649791  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649815  876668 pod_ready.go:81] duration metric: took 5.769971ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.649825  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649833  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.655071  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655100  876668 pod_ready.go:81] duration metric: took 5.259243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.655113  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655121  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.018817  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018849  876668 pod_ready.go:81] duration metric: took 363.719341ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.018863  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018872  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.417556  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417588  876668 pod_ready.go:81] duration metric: took 398.704259ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.417600  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417607  876668 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.816654  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816692  876668 pod_ready.go:81] duration metric: took 399.075859ms waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.816712  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816721  876668 pod_ready.go:38] duration metric: took 1.194471296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:49.816765  876668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:54:49.830335  876668 ops.go:34] apiserver oom_adj: -16
	I1114 15:54:49.830363  876668 kubeadm.go:640] restartCluster took 21.356528166s
	I1114 15:54:49.830372  876668 kubeadm.go:406] StartCluster complete in 21.422955285s
	I1114 15:54:49.830390  876668 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.830502  876668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:54:49.832470  876668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.859435  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:54:49.859707  876668 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:54:49.859810  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:49.859852  876668 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859873  876668 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859885  876668 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-529430"
	I1114 15:54:49.859892  876668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-529430"
	W1114 15:54:49.859895  876668 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:54:49.859954  876668 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859973  876668 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.859981  876668 addons.go:240] addon metrics-server should already be in state true
	I1114 15:54:49.860025  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.859956  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.860306  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860345  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860438  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860452  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860489  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860491  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.866006  876668 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-529430" context rescaled to 1 replicas
	I1114 15:54:49.866053  876668 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:54:49.878650  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I1114 15:54:49.878976  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I1114 15:54:49.879627  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I1114 15:54:49.891649  876668 out.go:177] * Verifying Kubernetes components...
	I1114 15:54:49.893450  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:54:49.892232  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892275  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892329  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.894259  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894282  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894473  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894486  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894610  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894623  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894687  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894892  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.894952  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894993  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.895598  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.895642  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.896296  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.896321  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.899095  876668 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.899120  876668 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:54:49.899151  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.899576  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.899622  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.917834  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1114 15:54:49.917842  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I1114 15:54:49.918442  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.918505  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.919007  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919026  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919167  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919187  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919493  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919562  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919803  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.920191  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.920237  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.922764  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1114 15:54:49.922969  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.924925  876668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:49.923380  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.926603  876668 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:49.926625  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:54:49.926647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.927991  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.928012  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.928459  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.928683  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.930696  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.930740  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.931131  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.931154  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.931330  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.931491  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.931647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.931775  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.934128  876668 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:54:49.936007  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:54:49.936031  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:54:49.936056  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.939725  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.939782  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I1114 15:54:49.940336  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.940442  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.940467  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.940822  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.941060  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.941093  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.941095  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.941211  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.941388  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.941856  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.942057  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.943639  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.943972  876668 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:49.943991  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:54:49.944009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.947172  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947631  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.947663  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947902  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.948102  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.948278  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.948579  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:46.955010  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:48.955172  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:50.066801  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:50.084526  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:54:50.084555  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:54:50.145315  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:50.145671  876668 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:50.146084  876668 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 15:54:50.151627  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:54:50.151646  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:54:50.216318  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:50.216349  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:54:50.316434  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:51.787528  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642164298s)
	I1114 15:54:51.787644  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787672  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.787695  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.720847981s)
	I1114 15:54:51.787744  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788039  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788064  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788075  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788086  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788094  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788109  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788119  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790294  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790322  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.790327  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790349  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.803844  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.803875  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.804205  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.804238  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.804239  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.925929  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.609443677s)
	I1114 15:54:51.926001  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926019  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926385  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926429  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.926456  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926468  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926483  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926795  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926814  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926826  876668 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-529430"
	I1114 15:54:51.926829  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:52.146969  876668 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1114 15:54:48.761692  876396 retry.go:31] will retry after 7.067385779s: kubelet not initialised
	I1114 15:54:50.000157  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.780649338s)
	I1114 15:54:50.000194  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1114 15:54:50.000227  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:50.000281  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:52.291215  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (2.290903759s)
	I1114 15:54:52.291244  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1114 15:54:52.291271  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:52.291312  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:53.739008  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.447671823s)
	I1114 15:54:53.739041  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1114 15:54:53.739066  876065 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:53.739126  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:52.194351  876668 addons.go:502] enable addons completed in 2.33463136s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1114 15:54:52.220203  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:54.220773  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:50.957159  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:53.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.458026  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.834422  876396 retry.go:31] will retry after 18.847542128s: kubelet not initialised
	I1114 15:54:56.221753  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:56.720961  876668 node_ready.go:49] node "default-k8s-diff-port-529430" has status "Ready":"True"
	I1114 15:54:56.720989  876668 node_ready.go:38] duration metric: took 6.575288694s waiting for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:56.721001  876668 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:56.730382  876668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736722  876668 pod_ready.go:92] pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:56.736761  876668 pod_ready.go:81] duration metric: took 6.345209ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736774  876668 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:58.773825  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:57.458580  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:59.956188  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:01.061681  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.322513643s)
	I1114 15:55:01.061716  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1114 15:55:01.061753  876065 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.061812  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.811277  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1114 15:55:01.811342  876065 cache_images.go:123] Successfully loaded all cached images
	I1114 15:55:01.811352  876065 cache_images.go:92] LoadImages completed in 19.665858366s
	I1114 15:55:01.811461  876065 ssh_runner.go:195] Run: crio config
	I1114 15:55:01.881576  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:01.881603  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:01.881622  876065 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:55:01.881646  876065 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-490998 NodeName:no-preload-490998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:55:01.881781  876065 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-490998"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:55:01.881859  876065 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-490998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:55:01.881918  876065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:55:01.892613  876065 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:55:01.892696  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:55:01.902267  876065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1114 15:55:01.919728  876065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:55:01.936188  876065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1114 15:55:01.954510  876065 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1114 15:55:01.958337  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:55:01.970290  876065 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998 for IP: 192.168.50.251
	I1114 15:55:01.970328  876065 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:55:01.970513  876065 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:55:01.970563  876065 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:55:01.970662  876065 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/client.key
	I1114 15:55:01.970794  876065 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key.6b358a63
	I1114 15:55:01.970857  876065 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key
	I1114 15:55:01.971003  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:55:01.971065  876065 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:55:01.971079  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:55:01.971123  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:55:01.971160  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:55:01.971192  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:55:01.971252  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:55:01.972129  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:55:01.996012  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:55:02.020778  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:55:02.044395  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:55:02.066866  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:55:02.089331  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:55:02.113148  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:55:02.136083  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:55:02.157833  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:55:02.181150  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:55:02.203155  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:55:02.225839  876065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:55:02.243335  876065 ssh_runner.go:195] Run: openssl version
	I1114 15:55:02.249465  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:55:02.259874  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264340  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264401  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.270441  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:55:02.282031  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:55:02.293297  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298093  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298155  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.303668  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:55:02.315423  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:55:02.325976  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332124  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332194  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.339377  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
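	(Editor's note: the lines above install each copied PEM into the system trust store by hashing it with `openssl x509 -hash -noout -in <pem>` and symlinking it as `/etc/ssl/certs/<hash>.0`. A minimal Go sketch of that step, shelling out to openssl exactly as the logged commands do; the example path is illustrative, not taken from the test:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert mirrors the logged steps: hash the PEM with openssl and point
// /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients will trust it.
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate `ln -fs` by replacing any existing link
	return os.Symlink(pem, link)
}

func main() {
	// Illustrative path; the test copies its CA to /usr/share/ca-certificates first.
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```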
	I1114 15:55:02.350318  876065 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:55:02.354796  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:55:02.360867  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:55:02.366306  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:55:02.372186  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:55:02.377900  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:55:02.383519  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
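	(Editor's note: `openssl x509 -noout -in <crt> -checkend 86400`, run above against each control-plane certificate, exits non-zero if the certificate expires within the next 24 hours. The same check in pure Go, a sketch using only the standard library; the file path is an assumption for illustration:)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, equivalent to `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; adjust for your own cluster.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```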
	I1114 15:55:02.389128  876065 kubeadm.go:404] StartCluster: {Name:no-preload-490998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:55:02.389229  876065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:55:02.389304  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:02.428473  876065 cri.go:89] found id: ""
	I1114 15:55:02.428578  876065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:55:02.439944  876065 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:55:02.439969  876065 kubeadm.go:636] restartCluster start
	I1114 15:55:02.440079  876065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:55:02.450025  876065 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.451533  876065 kubeconfig.go:92] found "no-preload-490998" server: "https://192.168.50.251:8443"
	I1114 15:55:02.454290  876065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:55:02.463352  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.463410  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.474007  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.474025  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.474065  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.484826  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.985519  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.985595  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.998224  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.485905  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.486059  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.499281  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.985805  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.985925  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.998086  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:00.819591  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:02.773550  876668 pod_ready.go:92] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.773573  876668 pod_ready.go:81] duration metric: took 6.036790568s waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.773582  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778746  876668 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.778764  876668 pod_ready.go:81] duration metric: took 5.176465ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778772  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784332  876668 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.784353  876668 pod_ready.go:81] duration metric: took 5.572815ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784366  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789492  876668 pod_ready.go:92] pod "kube-proxy-zpchs" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.789514  876668 pod_ready.go:81] duration metric: took 5.139759ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789524  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796606  876668 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.796628  876668 pod_ready.go:81] duration metric: took 7.097079ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796639  876668 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.454894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.956449  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.485284  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.485387  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.498240  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:04.985846  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.985936  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.998901  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.485250  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.485365  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.497261  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.985411  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.985511  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.997656  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.485227  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.485332  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.497310  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.985893  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.985977  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.997585  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.485903  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.486001  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.498532  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.985881  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.985958  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.997898  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.485400  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.485512  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.497446  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.985912  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.986015  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.998121  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.081742  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:07.082515  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.580987  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:06.957307  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.455227  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.485641  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.485735  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.498347  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:09.985970  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.986073  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.997958  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.485503  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.485600  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.497407  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.985577  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.985655  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.998624  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.485146  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.485250  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.497837  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.985423  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.985551  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.997959  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:12.464381  876065 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:55:12.464449  876065 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:55:12.464478  876065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:55:12.464582  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:12.505435  876065 cri.go:89] found id: ""
	I1114 15:55:12.505532  876065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:55:12.522470  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:55:12.532890  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:55:12.532982  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542115  876065 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542141  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:12.684875  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:13.897464  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.21254145s)
	I1114 15:55:13.897509  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:11.582332  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.085102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:11.955438  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.455506  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.687822  876396 kubeadm.go:787] kubelet initialised
	I1114 15:55:14.687849  876396 kubeadm.go:788] duration metric: took 43.622781532s waiting for restarted kubelet to initialise ...
	I1114 15:55:14.687857  876396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:14.693560  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698796  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.698819  876396 pod_ready.go:81] duration metric: took 5.232669ms waiting for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698828  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703879  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.703903  876396 pod_ready.go:81] duration metric: took 5.067006ms waiting for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703916  876396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708064  876396 pod_ready.go:92] pod "etcd-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.708093  876396 pod_ready.go:81] duration metric: took 4.168333ms waiting for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708106  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713030  876396 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.713055  876396 pod_ready.go:81] duration metric: took 4.939899ms waiting for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713067  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087824  876396 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.087857  876396 pod_ready.go:81] duration metric: took 374.780312ms waiting for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087873  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.486984  876396 pod_ready.go:92] pod "kube-proxy-kw2ns" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.487011  876396 pod_ready.go:81] duration metric: took 399.130772ms waiting for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.487020  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886624  876396 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.886658  876396 pod_ready.go:81] duration metric: took 399.628757ms waiting for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886671  876396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.096314  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.174495  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
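	(Editor's note: the restart path above rebuilds the control plane piecewise with `kubeadm init phase certs|kubeconfig|kubelet-start|control-plane|etcd local` against the regenerated /var/tmp/minikube/kubeadm.yaml. In the test these commands run on the VM over SSH via sudo; a simplified local sketch of the same sequence, omitting that transport, with paths and the pinned-binaries PATH taken from the log:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		// `env PATH=...` mirrors the logged invocation, pointing kubeadm at the
		// binaries pinned for the target Kubernetes version (v1.28.3 here).
		args := append([]string{
			"PATH=/var/lib/minikube/binaries/v1.28.3:" + os.Getenv("PATH"),
			"kubeadm"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("env", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
```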
	I1114 15:55:14.254647  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:55:14.254765  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.273596  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.788350  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.288506  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.788580  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.288476  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.787853  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.816380  876065 api_server.go:72] duration metric: took 2.561735945s to wait for apiserver process to appear ...
	I1114 15:55:16.816408  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:55:16.816428  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:16.582309  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:18.584599  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:16.957605  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:19.457613  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.541438  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.541473  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:20.541490  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:20.582790  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.582838  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:21.083891  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.089625  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.089658  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:21.583184  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.599539  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.599576  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:22.083098  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:22.088480  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 15:55:22.096517  876065 api_server.go:141] control plane version: v1.28.3
	I1114 15:55:22.096545  876065 api_server.go:131] duration metric: took 5.280130119s to wait for apiserver health ...
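	(Editor's note: the healthz wait above shows the expected progression for a restarted apiserver: 403 while the probe is anonymous and bootstrap RBAC is not yet in place, 500 while poststarthooks such as rbac/bootstrap-roles are still completing, then 200. A minimal polling sketch against the same endpoint; the address comes from the log, and skipping TLS verification here stands in for the CA handling the real client does:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: the test trusts the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.251:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
```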
	I1114 15:55:22.096558  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:22.096568  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:22.098612  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:55:18.194723  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.195126  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.196472  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.100184  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:55:22.125049  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:55:22.150893  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:55:22.163922  876065 system_pods.go:59] 8 kube-system pods found
	I1114 15:55:22.163958  876065 system_pods.go:61] "coredns-5dd5756b68-n77fz" [e2f5ce73-a65e-40da-b554-c929f093a1a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:55:22.163970  876065 system_pods.go:61] "etcd-no-preload-490998" [01e272b5-4463-431d-8ed1-f561a90b667d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:55:22.163983  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [529f79fd-eae5-44e9-971d-b3ecb5ed025d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:55:22.163989  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ea299234-2456-4171-bac0-8e8ff4998596] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:55:22.163994  876065 system_pods.go:61] "kube-proxy-6hqk5" [7233dd72-138c-4148-834b-2dcb83a4cf00] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:55:22.163999  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [666e8a03-50b1-4b08-84f3-c3c6ec8a5452] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:55:22.164005  876065 system_pods.go:61] "metrics-server-57f55c9bc5-6lg6h" [7afa1e38-c64c-4d03-9b00-5765e7e251ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:55:22.164036  876065 system_pods.go:61] "storage-provisioner" [1090ed8a-6424-4980-9ea7-b43e998d1eb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:55:22.164050  876065 system_pods.go:74] duration metric: took 13.132475ms to wait for pod list to return data ...
	I1114 15:55:22.164058  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:55:22.167930  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:55:22.168020  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 15:55:22.168033  876065 node_conditions.go:105] duration metric: took 3.969303ms to run NodePressure ...
	I1114 15:55:22.168055  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:22.456975  876065 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470174  876065 kubeadm.go:787] kubelet initialised
	I1114 15:55:22.470202  876065 kubeadm.go:788] duration metric: took 13.201285ms waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470216  876065 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:22.483150  876065 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:21.081628  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:23.083015  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:21.955808  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.455829  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.696004  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.195514  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.514847  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.519442  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.013526  876065 pod_ready.go:92] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:27.013584  876065 pod_ready.go:81] duration metric: took 4.530407487s waiting for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:27.013600  876065 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:29.032979  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:25.582366  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.080716  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.456123  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.955087  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:29.694646  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.194401  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:31.033810  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:33.033026  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.033058  876065 pod_ready.go:81] duration metric: took 6.019448696s waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.033071  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039148  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.039180  876065 pod_ready.go:81] duration metric: took 6.099138ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039194  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049651  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.049675  876065 pod_ready.go:81] duration metric: took 10.473938ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049685  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061928  876065 pod_ready.go:92] pod "kube-proxy-6hqk5" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.061971  876065 pod_ready.go:81] duration metric: took 12.277038ms waiting for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061984  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071422  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.071452  876065 pod_ready.go:81] duration metric: took 9.456301ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071465  876065 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
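	(Editor's note: from here the log is largely the harness polling each cluster's metrics-server pod for a Ready condition that stays "False". That readiness test amounts to inspecting the pod's PodReady condition; a client-go sketch of the same check, with the kubeconfig path and polling interval as illustrative assumptions and the pod name taken from the log:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, which is what
// the pod_ready lines in this log are polling for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "metrics-server-57f55c9bc5-6lg6h"
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println(name, "is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```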
	I1114 15:55:30.081625  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.082675  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.581547  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:30.955154  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.957772  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.454775  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.194959  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:36.195495  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.339391  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.340404  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.083295  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.584210  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.956659  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:38.696669  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.194485  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.838699  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.840605  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.081223  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.081468  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.454630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.455871  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:43.195172  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:45.195687  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.339878  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.838910  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.841677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.082382  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.582248  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.457525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.955133  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:47.695467  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.195263  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.339284  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.340315  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.082546  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.581238  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.955630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.454502  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.455395  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:52.694030  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:54.694593  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:56.695136  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.838685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.838864  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.581986  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.582037  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.582635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.955377  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.963166  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:01.195573  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.840578  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.338828  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.082323  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.582531  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.454214  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.454975  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:03.198457  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:05.694675  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.339632  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.340001  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.840358  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:07.082081  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:09.582483  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.455257  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.455373  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.457344  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.196641  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.693989  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.339845  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:13.839805  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.583615  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:14.083682  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.957092  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.456347  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.694792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.200049  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.339768  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:18.839853  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.583278  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:19.081994  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.954665  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.454724  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.697859  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.194201  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.194738  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.840457  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.339880  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:21.082759  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.581646  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.457299  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.954029  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.694448  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.696563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:25.342126  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:27.839304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.083724  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:28.582086  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.955572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.459642  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.194785  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.693765  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:30.339130  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:32.339361  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.083363  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.582213  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.955312  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.955576  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.694783  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:34.339538  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.839469  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.842444  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.081206  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.581263  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.457091  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.956262  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.195134  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:40.195875  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.343304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.839634  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.080021  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.081543  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.453768  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.455182  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.457368  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:42.694667  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.195018  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.197081  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:46.338815  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:48.339683  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.083139  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.582320  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.954718  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.455135  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:49.696028  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.194484  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.340708  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.845026  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.082635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.583485  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.455840  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.955079  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.194627  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.197158  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.338956  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.339983  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.081903  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.583102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.955380  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.956134  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.695165  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.196563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:59.340299  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.838688  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.839025  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:00.080983  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:02.582197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:04.583222  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.454473  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.455187  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.455628  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.694518  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.695324  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.839239  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.341567  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.081736  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.581889  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.954781  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.954913  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.194118  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.194688  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:12.195198  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.840317  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.582436  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.583580  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.955894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.459525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.195588  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.195922  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:15.339470  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:17.340059  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.081770  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.082006  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.954957  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.455211  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.695530  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.193801  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.839618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.839819  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:20.083348  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:22.581010  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.582114  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.958579  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.454848  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:23.196520  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:25.196779  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.339942  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.340928  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.841122  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.583453  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:29.082667  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.455784  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.954086  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:27.695279  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.194416  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.341608  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.343898  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.581417  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.583852  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.955148  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.455525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:32.693640  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:34.695191  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.194999  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.838294  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.838948  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:36.082181  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.582488  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.955108  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.454392  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:40.455291  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.195193  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.694849  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.839180  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.339359  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.081697  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:43.081876  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.455905  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.962584  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.194494  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:46.195239  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.840607  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.338846  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:45.582002  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.083197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.454539  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.455025  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.694661  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.695232  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.840392  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.580410  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.580961  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.581502  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:51.954903  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.454053  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:53.194450  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:55.196537  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.339997  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.839677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.080798  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.087078  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.454639  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:58.955200  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.696210  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:00.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:02.194961  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.339152  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.340037  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.838551  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.582808  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.084331  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.458365  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.955679  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.696770  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:07.195364  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:05.840151  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.340709  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.582153  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.083260  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.454599  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.458281  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.196674  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.696022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.839588  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.342479  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.583479  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:14.081451  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.954623  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.455233  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.147383  876220 pod_ready.go:81] duration metric: took 4m0.000589332s waiting for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	E1114 15:58:15.147416  876220 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:58:15.147446  876220 pod_ready.go:38] duration metric: took 4m11.626263996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:15.147477  876220 kubeadm.go:640] restartCluster took 4m32.524775831s
	W1114 15:58:15.147587  876220 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:58:15.147630  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
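Note on the repeated "Ready":"False" lines above: they come from minikube's pod_ready wait loop, which keeps polling each profile's metrics-server pod until it reports Ready or the 4m0s budget runs out (hit at 15:58:15 for the run that then falls back to kubeadm reset). The following is a minimal client-go sketch of that kind of poll; it is an illustration under assumed kubeconfig path, label selector, and intervals, not minikube's actual pod_ready.go.

    // Sketch only: poll the metrics-server pod's Ready condition in
    // kube-system, roughly the way the log lines above do.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path; minikube uses its per-profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
            if err == nil {
                for i := range pods.Items {
                    fmt.Printf("pod %q Ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
                }
            }
            time.Sleep(2 * time.Second) // the log polls roughly every 2-2.5s
        }
    }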
	I1114 15:58:14.195824  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.696055  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.841115  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.341347  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.084839  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.582575  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.696792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:20.838749  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:22.840049  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.080598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.081173  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.694974  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:26.196317  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.340015  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.839312  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.081700  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.582564  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.582728  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.037182  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.889530708s)
	I1114 15:58:29.037253  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:29.052797  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:58:29.061624  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:58:29.070799  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:58:29.070848  876220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:58:29.303905  876220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:58:28.695122  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.696046  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.341383  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:32.341988  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:31.584191  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.082795  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:33.195568  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:35.695145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.839094  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.840873  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.086791  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:38.581233  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.234828  876220 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:58:40.234881  876220 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:58:40.234965  876220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:58:40.235127  876220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:58:40.235264  876220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:58:40.235361  876220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:58:40.237159  876220 out.go:204]   - Generating certificates and keys ...
	I1114 15:58:40.237276  876220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:58:40.237366  876220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:58:40.237511  876220 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:58:40.237608  876220 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:58:40.237697  876220 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:58:40.237791  876220 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:58:40.237883  876220 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:58:40.237975  876220 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:58:40.238066  876220 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:58:40.238161  876220 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:58:40.238213  876220 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:58:40.238283  876220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:58:40.238352  876220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:58:40.238422  876220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:58:40.238506  876220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:58:40.238582  876220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:58:40.238725  876220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:58:40.238816  876220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:58:40.240266  876220 out.go:204]   - Booting up control plane ...
	I1114 15:58:40.240404  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:58:40.240508  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:58:40.240593  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:58:40.240822  876220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:58:40.240958  876220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:58:40.241018  876220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:58:40.241226  876220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:58:40.241333  876220 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.509675 seconds
	I1114 15:58:40.241470  876220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:58:40.241658  876220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:58:40.241744  876220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:58:40.241979  876220 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-279880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:58:40.242054  876220 kubeadm.go:322] [bootstrap-token] Using token: 2hujph.0fcw82xd7gxidhsk
	I1114 15:58:40.243677  876220 out.go:204]   - Configuring RBAC rules ...
	I1114 15:58:40.243823  876220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:58:40.243942  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:58:40.244131  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:58:40.244252  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:58:40.244351  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:58:40.244464  876220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:58:40.244616  876220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:58:40.244673  876220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:58:40.244732  876220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:58:40.244762  876220 kubeadm.go:322] 
	I1114 15:58:40.244828  876220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:58:40.244835  876220 kubeadm.go:322] 
	I1114 15:58:40.244904  876220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:58:40.244913  876220 kubeadm.go:322] 
	I1114 15:58:40.244934  876220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:58:40.244982  876220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:58:40.245027  876220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:58:40.245033  876220 kubeadm.go:322] 
	I1114 15:58:40.245108  876220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:58:40.245128  876220 kubeadm.go:322] 
	I1114 15:58:40.245185  876220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:58:40.245195  876220 kubeadm.go:322] 
	I1114 15:58:40.245269  876220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:58:40.245376  876220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:58:40.245483  876220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:58:40.245493  876220 kubeadm.go:322] 
	I1114 15:58:40.245606  876220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:58:40.245700  876220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:58:40.245708  876220 kubeadm.go:322] 
	I1114 15:58:40.245828  876220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.245986  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:58:40.246023  876220 kubeadm.go:322] 	--control-plane 
	I1114 15:58:40.246036  876220 kubeadm.go:322] 
	I1114 15:58:40.246148  876220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:58:40.246158  876220 kubeadm.go:322] 
	I1114 15:58:40.246247  876220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.246364  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:58:40.246386  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:58:40.246394  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:58:40.248160  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:58:40.249669  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:58:40.299570  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
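The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in this log. The snippet below only sketches a generic bridge + portmap CNI config of the kind the "Configuring bridge CNI" step installs, written from a Go string so the shape of the file is visible; the field values are assumptions, not the actual minikube file.

    // Illustrative only: write a generic bridge CNI conflist. The real
    // contents of minikube's 1-k8s.conflist are not shown in this log.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }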
	I1114 15:58:40.399662  876220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:58:40.399751  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=embed-certs-279880 minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.399759  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.456044  876220 ops.go:34] apiserver oom_adj: -16
	I1114 15:58:40.674206  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.780887  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:37.695540  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.195681  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:39.338902  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.339264  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.339844  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.582722  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.082401  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.391744  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:41.892060  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.392311  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.892385  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.391523  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.892286  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.392103  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.891494  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:45.392324  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.695415  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.195275  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.842259  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.339758  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.582481  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.079990  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.891330  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.391723  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.892283  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.391436  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.891664  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.392116  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.892052  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.391957  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.892316  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:50.391756  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.696088  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.195252  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.195695  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.891614  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.391818  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.891371  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.391565  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.544346  876220 kubeadm.go:1081] duration metric: took 12.144659895s to wait for elevateKubeSystemPrivileges.
	I1114 15:58:52.544391  876220 kubeadm.go:406] StartCluster complete in 5m9.978264522s
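The burst of "kubectl get sa default" runs above (roughly every 500ms from 15:58:40 to 15:58:52) is the cluster bring-up waiting for the "default" ServiceAccount to exist before the elevateKubeSystemPrivileges step is considered done. A hedged client-go equivalent of that retry, assuming a clientset built as in the earlier sketch, not minikube's actual implementation:

    // Sketch only: poll until the "default" ServiceAccount exists in the
    // "default" namespace, mirroring the ~500ms cadence of the repeated
    // "kubectl get sa default" runs in the log above.
    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForDefaultSA(client kubernetes.Interface) error {
        return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return false, nil // not created yet; keep polling
            }
            return err == nil, err
        })
    }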
	I1114 15:58:52.544428  876220 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.544541  876220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:58:52.547345  876220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.547635  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:58:52.547785  876220 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:58:52.547873  876220 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-279880"
	I1114 15:58:52.547886  876220 addons.go:69] Setting default-storageclass=true in profile "embed-certs-279880"
	I1114 15:58:52.547903  876220 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-279880"
	I1114 15:58:52.547907  876220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-279880"
	W1114 15:58:52.547915  876220 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:58:52.547951  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:58:52.547986  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548010  876220 addons.go:69] Setting metrics-server=true in profile "embed-certs-279880"
	I1114 15:58:52.548027  876220 addons.go:231] Setting addon metrics-server=true in "embed-certs-279880"
	W1114 15:58:52.548038  876220 addons.go:240] addon metrics-server should already be in state true
	I1114 15:58:52.548083  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548508  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548612  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548844  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.568396  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I1114 15:58:52.568429  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I1114 15:58:52.568402  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I1114 15:58:52.569005  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569019  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569009  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569581  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569612  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.569772  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569796  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.570042  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570183  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.570699  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.570718  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.570742  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.570723  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.571364  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.571943  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.571975  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.575936  876220 addons.go:231] Setting addon default-storageclass=true in "embed-certs-279880"
	W1114 15:58:52.575961  876220 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:58:52.575996  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.576368  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.576412  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.588007  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I1114 15:58:52.588767  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.589487  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.589505  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.589943  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.590164  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.591841  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1114 15:58:52.592269  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.592610  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.594453  876220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:58:52.593100  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.594839  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1114 15:58:52.595836  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:58:52.595856  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:58:52.595874  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.595879  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.596356  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.596654  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.596683  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.597179  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.597199  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.597596  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.598225  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.598250  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.598972  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599389  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.599412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599655  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.599823  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.599971  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.600085  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.601301  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.603202  876220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:58:52.604691  876220 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.604701  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:58:52.604714  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.607585  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.607911  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.607942  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.608138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.608309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.608450  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.608586  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.614716  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1114 15:58:52.615047  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.615462  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.615503  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.615849  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.616006  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.617386  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.617630  876220 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.617647  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:58:52.617666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.620337  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620656  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.620700  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620951  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.621103  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.621252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.621374  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.636800  876220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-279880" context rescaled to 1 replicas
	I1114 15:58:52.636844  876220 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:58:52.638665  876220 out.go:177] * Verifying Kubernetes components...
	I1114 15:58:50.340524  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.341233  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.080611  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.081851  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.582577  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.640094  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:52.829938  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.840140  876220 node_ready.go:35] waiting up to 6m0s for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.840653  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:58:52.858164  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.877415  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:58:52.877448  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:58:52.900588  876220 node_ready.go:49] node "embed-certs-279880" has status "Ready":"True"
	I1114 15:58:52.900614  876220 node_ready.go:38] duration metric: took 60.432125ms waiting for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.900624  876220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:52.972955  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:53.009532  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:58:53.009564  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:58:53.064247  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:53.064283  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:58:53.168472  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:54.543952  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.713966912s)
	I1114 15:58:54.544016  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544029  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544312  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544332  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.544343  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544374  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544650  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544697  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.569577  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.728879408s)
	I1114 15:58:54.569603  876220 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
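For orientation, the long command that just completed patches the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1 in this run). A minimal, hypothetical Go sketch of the same get-ConfigMap / sed-splice / replace pattern; the kubectl path, kubeconfig location, and IP come from the log, while the helper name and the omission of the `log` plugin tweak are assumptions of this sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

// injectHostRecord rewrites the CoreDNS ConfigMap so that hostname resolves
// to hostIP inside the cluster: fetch the ConfigMap, splice a hosts{} block
// in front of the forward plugin with sed, then replace the object.
// Hypothetical sketch, not the minikube implementation.
func injectHostRecord(kubectl, kubeconfig, hostIP, hostname string) error {
	script := fmt.Sprintf(
		`sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml `+
			`| sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %[3]s %[4]s\n           fallthrough\n        }' `+
			`| sudo %[1]s --kubeconfig=%[2]s replace -f -`,
		kubectl, kubeconfig, hostIP, hostname)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("coredns patch failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	_ = injectHostRecord(
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"192.168.39.1", "host.minikube.internal")
}
```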
	I1114 15:58:54.572090  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.572118  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.572396  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.572420  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063126  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20491351s)
	I1114 15:58:55.063197  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063218  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063551  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063572  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063583  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063596  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063609  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.063888  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063910  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.228754  876220 pod_ready.go:102] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:55.671980  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.503435235s)
	I1114 15:58:55.672050  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672066  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672415  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672481  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672502  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672514  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.672777  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672795  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672807  876220 addons.go:467] Verifying addon metrics-server=true in "embed-certs-279880"
	I1114 15:58:55.674712  876220 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:58:55.676182  876220 addons.go:502] enable addons completed in 3.128402943s: enabled=[default-storageclass storage-provisioner metrics-server]
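The addon flow above copies each manifest to /etc/kubernetes/addons on the VM ("scp memory -->") and then applies them with the bundled kubectl under an explicit KUBECONFIG. A hypothetical local sketch of the apply step only; the kubectl binary path, kubeconfig path, and manifest names are taken from the log, and running the command over SSH inside the VM, as minikube does, is deliberately omitted:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests invokes `kubectl apply -f` for each manifest path with
// an explicit KUBECONFIG, mirroring the command visible in the log.
// Hypothetical sketch; the real flow runs this over SSH on the node.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		})
	if err != nil {
		fmt.Println(err)
	}
}
```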
	I1114 15:58:54.695084  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.696106  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.844023  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:57.338618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.660605  876220 pod_ready.go:92] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.660642  876220 pod_ready.go:81] duration metric: took 3.687643856s waiting for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.660659  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671773  876220 pod_ready.go:92] pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.671803  876220 pod_ready.go:81] duration metric: took 11.134131ms waiting for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671817  876220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679179  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.679212  876220 pod_ready.go:81] duration metric: took 7.385218ms waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679224  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691696  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.691721  876220 pod_ready.go:81] duration metric: took 12.488161ms waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691734  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704134  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.704153  876220 pod_ready.go:81] duration metric: took 12.411686ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704161  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950181  876220 pod_ready.go:92] pod "kube-proxy-qdppd" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:57.950213  876220 pod_ready.go:81] duration metric: took 1.246044532s waiting for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950226  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237122  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:58.237150  876220 pod_ready.go:81] duration metric: took 286.915812ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237158  876220 pod_ready.go:38] duration metric: took 5.336525686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
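The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True (or the per-pod timeout expires, as happens later for the metrics-server pods). A minimal, hypothetical sketch of that waiting pattern using kubectl's jsonpath output; the helper name, poll interval, and the assumption that kubectl on PATH already points at the right cluster are all this sketch's, not minikube's:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls a pod's Ready condition with kubectl until it reports
// "True" or the timeout elapses. Hypothetical sketch of the waiting pattern
// visible in the log.
func waitPodReady(namespace, pod string, timeout time.Duration) error {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	if err := waitPodReady("kube-system", "coredns-5dd5756b68-2kj42", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```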
	I1114 15:58:58.237177  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:58:58.237227  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:58:58.260115  876220 api_server.go:72] duration metric: took 5.623228202s to wait for apiserver process to appear ...
	I1114 15:58:58.260147  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:58:58.260169  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:58:58.265361  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:58:58.266889  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:58:58.266918  876220 api_server.go:131] duration metric: took 6.76351ms to wait for apiserver health ...
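The api_server.go lines above wait for the kube-apiserver process, then poll https://192.168.39.147:8443/healthz until it returns 200 with body "ok". A minimal, hypothetical Go sketch of that health check, standard library only; the endpoint URL comes from the log, while the helper name, timeout, and the choice to skip verification of the apiserver's self-signed certificate are assumptions of the sketch:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200, or the timeout elapses. Hypothetical helper, not the minikube
// implementation.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a certificate for its IP that this sketch
			// does not trust, so verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.147:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```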
	I1114 15:58:58.266938  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:58:58.439329  876220 system_pods.go:59] 9 kube-system pods found
	I1114 15:58:58.439362  876220 system_pods.go:61] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.439367  876220 system_pods.go:61] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.439371  876220 system_pods.go:61] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.439375  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.439379  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.439383  876220 system_pods.go:61] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.439387  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.439395  876220 system_pods.go:61] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.439402  876220 system_pods.go:61] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.439412  876220 system_pods.go:74] duration metric: took 172.465662ms to wait for pod list to return data ...
	I1114 15:58:58.439426  876220 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:58:58.637240  876220 default_sa.go:45] found service account: "default"
	I1114 15:58:58.637269  876220 default_sa.go:55] duration metric: took 197.834816ms for default service account to be created ...
	I1114 15:58:58.637278  876220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:58:58.840945  876220 system_pods.go:86] 9 kube-system pods found
	I1114 15:58:58.840976  876220 system_pods.go:89] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.840984  876220 system_pods.go:89] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.840990  876220 system_pods.go:89] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.840996  876220 system_pods.go:89] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.841001  876220 system_pods.go:89] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.841008  876220 system_pods.go:89] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.841014  876220 system_pods.go:89] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.841024  876220 system_pods.go:89] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.841032  876220 system_pods.go:89] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.841046  876220 system_pods.go:126] duration metric: took 203.761925ms to wait for k8s-apps to be running ...
	I1114 15:58:58.841058  876220 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:58:58.841143  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:58.857376  876220 system_svc.go:56] duration metric: took 16.307402ms WaitForService to wait for kubelet.
	I1114 15:58:58.857414  876220 kubeadm.go:581] duration metric: took 6.220529321s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:58:58.857439  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:58:59.036083  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:58:59.036112  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:58:59.036123  876220 node_conditions.go:105] duration metric: took 178.67985ms to run NodePressure ...
	I1114 15:58:59.036136  876220 start.go:228] waiting for startup goroutines ...
	I1114 15:58:59.036142  876220 start.go:233] waiting for cluster config update ...
	I1114 15:58:59.036152  876220 start.go:242] writing updated cluster config ...
	I1114 15:58:59.036464  876220 ssh_runner.go:195] Run: rm -f paused
	I1114 15:58:59.092065  876220 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:58:59.093827  876220 out.go:177] * Done! kubectl is now configured to use "embed-certs-279880" cluster and "default" namespace by default
	I1114 15:58:57.082065  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.082525  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:58.696271  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.195205  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.339863  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.839918  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.582598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:02.796920  876668 pod_ready.go:81] duration metric: took 4m0.000259164s waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:02.796965  876668 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:02.796978  876668 pod_ready.go:38] duration metric: took 4m6.075965552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:02.796999  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:02.797042  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:02.797123  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:02.851170  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:02.851199  876668 cri.go:89] found id: ""
	I1114 15:59:02.851210  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:02.851271  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.857251  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:02.857323  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:02.904914  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:02.904939  876668 cri.go:89] found id: ""
	I1114 15:59:02.904947  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:02.904994  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.909276  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:02.909350  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:02.944708  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:02.944778  876668 cri.go:89] found id: ""
	I1114 15:59:02.944789  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:02.944856  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.949260  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:02.949334  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:02.986830  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:02.986858  876668 cri.go:89] found id: ""
	I1114 15:59:02.986868  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:02.986928  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.991432  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:02.991511  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:03.028072  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.028101  876668 cri.go:89] found id: ""
	I1114 15:59:03.028113  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:03.028177  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.032678  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:03.032771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:03.070651  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.070671  876668 cri.go:89] found id: ""
	I1114 15:59:03.070679  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:03.070727  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.075127  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:03.075192  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:03.117191  876668 cri.go:89] found id: ""
	I1114 15:59:03.117221  876668 logs.go:284] 0 containers: []
	W1114 15:59:03.117229  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:03.117235  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:03.117300  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:03.163227  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.163255  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:03.163260  876668 cri.go:89] found id: ""
	I1114 15:59:03.163269  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:03.163322  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.167410  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.171362  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:03.171389  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:03.330078  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:03.330113  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:03.372318  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:03.372349  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.414474  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:03.414506  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.471989  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:03.472025  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.516802  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:03.516834  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:03.532186  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:03.532218  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:03.987984  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:03.988029  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:04.045261  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:04.045305  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:04.095816  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:04.095853  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:04.148084  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:04.148132  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:04.200992  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:04.201039  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:04.239171  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:04.239207  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
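The log-gathering pass above runs `crictl ps -a --quiet --name=<component>` to find each control-plane container and then `crictl logs --tail 400 <id>` for every match. A hypothetical Go sketch of that same loop, using only the crictl invocations shown in the log; the function name, the component list, and the assumption that crictl is on PATH with root privileges are the sketch's own:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists all CRI containers matching a name filter and
// dumps the last 400 lines of each one's logs, mirroring the crictl commands
// visible in the log above. Hypothetical sketch.
func gatherComponentLogs(name string) error {
	ids, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %v", name, err)
	}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s (%s): %v", name, id, err)
		}
		fmt.Printf("==> %s [%s]\n%s\n", name, id, out)
	}
	return nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		if err := gatherComponentLogs(component); err != nil {
			fmt.Println(err)
		}
	}
}
```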
	I1114 15:59:03.695077  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.194941  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:04.339648  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.839045  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:08.841546  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.787847  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:06.808020  876668 api_server.go:72] duration metric: took 4m16.941929205s to wait for apiserver process to appear ...
	I1114 15:59:06.808052  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:06.808087  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:06.808146  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:06.849716  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:06.849747  876668 cri.go:89] found id: ""
	I1114 15:59:06.849758  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:06.849816  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.854025  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:06.854093  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:06.894331  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:06.894361  876668 cri.go:89] found id: ""
	I1114 15:59:06.894371  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:06.894430  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.899047  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:06.899137  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:06.947156  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:06.947194  876668 cri.go:89] found id: ""
	I1114 15:59:06.947206  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:06.947279  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.952972  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:06.953045  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:06.997872  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:06.997899  876668 cri.go:89] found id: ""
	I1114 15:59:06.997910  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:06.997972  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.002282  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:07.002362  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:07.041689  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.041722  876668 cri.go:89] found id: ""
	I1114 15:59:07.041734  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:07.041800  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.045730  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:07.045797  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:07.091996  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.092021  876668 cri.go:89] found id: ""
	I1114 15:59:07.092032  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:07.092094  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.100690  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:07.100771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:07.141635  876668 cri.go:89] found id: ""
	I1114 15:59:07.141670  876668 logs.go:284] 0 containers: []
	W1114 15:59:07.141681  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:07.141689  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:07.141750  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:07.184807  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:07.184839  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.184847  876668 cri.go:89] found id: ""
	I1114 15:59:07.184857  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:07.184920  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.189361  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.197666  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:07.197694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:07.243532  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:07.243568  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:07.284479  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:07.284520  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.326309  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:07.326341  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:07.794035  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:07.794077  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.836008  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:07.836050  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:07.886157  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:07.886192  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:07.930752  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:07.930795  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.983727  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:07.983765  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:08.024969  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:08.025000  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:08.079050  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:08.079090  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:08.093653  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:08.093691  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:08.228823  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:08.228864  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:08.196022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.196145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:12.196843  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:11.340269  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:13.840055  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.780836  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:59:10.793555  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:59:10.794839  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:59:10.794868  876668 api_server.go:131] duration metric: took 3.986808086s to wait for apiserver health ...
	I1114 15:59:10.794878  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:10.794907  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:10.794989  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:10.842028  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:10.842050  876668 cri.go:89] found id: ""
	I1114 15:59:10.842059  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:10.842113  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.846938  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:10.847030  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:10.893360  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:10.893386  876668 cri.go:89] found id: ""
	I1114 15:59:10.893394  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:10.893443  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.899601  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:10.899669  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:10.949519  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:10.949542  876668 cri.go:89] found id: ""
	I1114 15:59:10.949550  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:10.949602  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.953875  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:10.953936  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:10.994565  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:10.994595  876668 cri.go:89] found id: ""
	I1114 15:59:10.994605  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:10.994659  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.999120  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:10.999187  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:11.039364  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.039392  876668 cri.go:89] found id: ""
	I1114 15:59:11.039403  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:11.039509  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.044115  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:11.044174  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:11.088803  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.088835  876668 cri.go:89] found id: ""
	I1114 15:59:11.088846  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:11.088917  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.094005  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:11.094076  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:11.145247  876668 cri.go:89] found id: ""
	I1114 15:59:11.145276  876668 logs.go:284] 0 containers: []
	W1114 15:59:11.145285  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:11.145294  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:11.145355  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:11.188916  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.188950  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.188957  876668 cri.go:89] found id: ""
	I1114 15:59:11.188967  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:11.189029  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.195578  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.200146  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:11.200174  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:11.240413  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:11.240458  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.290614  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:11.290648  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:11.638700  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:11.638743  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:11.654234  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:11.654267  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.709147  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:11.709184  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:11.751661  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:11.751701  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.796993  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:11.797041  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.841478  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:11.841510  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:11.972862  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:11.972902  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:12.019217  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:12.019260  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:12.073396  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:12.073443  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:12.142653  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:12.142694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:14.704129  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:59:14.704159  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.704167  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.704173  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.704179  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.704184  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.704191  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.704200  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.704207  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.704217  876668 system_pods.go:74] duration metric: took 3.909331461s to wait for pod list to return data ...
	I1114 15:59:14.704231  876668 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:14.706920  876668 default_sa.go:45] found service account: "default"
	I1114 15:59:14.706944  876668 default_sa.go:55] duration metric: took 2.702527ms for default service account to be created ...
	I1114 15:59:14.706954  876668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:14.714049  876668 system_pods.go:86] 8 kube-system pods found
	I1114 15:59:14.714080  876668 system_pods.go:89] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.714089  876668 system_pods.go:89] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.714096  876668 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.714101  876668 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.714106  876668 system_pods.go:89] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.714113  876668 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.714128  876668 system_pods.go:89] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.714142  876668 system_pods.go:89] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.714152  876668 system_pods.go:126] duration metric: took 7.191238ms to wait for k8s-apps to be running ...
	I1114 15:59:14.714174  876668 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:59:14.714231  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:14.734987  876668 system_svc.go:56] duration metric: took 20.804278ms WaitForService to wait for kubelet.
	I1114 15:59:14.735015  876668 kubeadm.go:581] duration metric: took 4m24.868931304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:59:14.735038  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:59:14.737844  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:59:14.737868  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:59:14.737878  876668 node_conditions.go:105] duration metric: took 2.834918ms to run NodePressure ...
	I1114 15:59:14.737889  876668 start.go:228] waiting for startup goroutines ...
	I1114 15:59:14.737895  876668 start.go:233] waiting for cluster config update ...
	I1114 15:59:14.737905  876668 start.go:242] writing updated cluster config ...
	I1114 15:59:14.738157  876668 ssh_runner.go:195] Run: rm -f paused
	I1114 15:59:14.791076  876668 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:59:14.793853  876668 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-529430" cluster and "default" namespace by default
	I1114 15:59:14.694842  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:15.887599  876396 pod_ready.go:81] duration metric: took 4m0.000892827s waiting for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:15.887641  876396 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:15.887664  876396 pod_ready.go:38] duration metric: took 4m1.199797165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:15.887694  876396 kubeadm.go:640] restartCluster took 5m7.501574769s
	W1114 15:59:15.887782  876396 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:15.887859  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:16.340114  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:18.340157  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:20.901839  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.013944828s)
	I1114 15:59:20.901933  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:20.915929  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:20.928081  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:20.937656  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:20.937756  876396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1114 15:59:20.998439  876396 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1114 15:59:20.998593  876396 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:21.145429  876396 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:21.145639  876396 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:21.145777  876396 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:21.387825  876396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:21.388897  876396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:21.396490  876396 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1114 15:59:21.518176  876396 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:21.520261  876396 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:21.520398  876396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:21.520496  876396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:21.520590  876396 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:21.520686  876396 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:21.520797  876396 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:21.520918  876396 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:21.521009  876396 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:21.521434  876396 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:21.521822  876396 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:21.522333  876396 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:21.522651  876396 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:21.522730  876396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:21.707438  876396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:21.890929  876396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:22.058077  876396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:22.234616  876396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:22.235636  876396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:22.237626  876396 out.go:204]   - Booting up control plane ...
	I1114 15:59:22.237743  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:22.241964  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:22.242976  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:22.244745  876396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:22.248349  876396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:20.341685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:22.838566  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:25.337887  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:27.341368  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.256998  876396 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005833 seconds
	I1114 15:59:32.257145  876396 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:32.272061  876396 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:32.797161  876396 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:32.797367  876396 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-842105 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 15:59:33.314721  876396 kubeadm.go:322] [bootstrap-token] Using token: 04dlot.9kpu87sb3ajm8dfs
	I1114 15:59:33.316454  876396 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:33.316628  876396 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:33.324455  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:33.328877  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:33.335460  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:33.339307  876396 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:33.422742  876396 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:33.757796  876396 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:33.759150  876396 kubeadm.go:322] 
	I1114 15:59:33.759248  876396 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:33.759281  876396 kubeadm.go:322] 
	I1114 15:59:33.759442  876396 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:33.759459  876396 kubeadm.go:322] 
	I1114 15:59:33.759495  876396 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:33.759577  876396 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:33.759647  876396 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:33.759657  876396 kubeadm.go:322] 
	I1114 15:59:33.759726  876396 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:33.759828  876396 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:33.759922  876396 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:33.759931  876396 kubeadm.go:322] 
	I1114 15:59:33.760050  876396 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1114 15:59:33.760143  876396 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:33.760154  876396 kubeadm.go:322] 
	I1114 15:59:33.760239  876396 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760360  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:33.760397  876396 kubeadm.go:322]     --control-plane 	  
	I1114 15:59:33.760408  876396 kubeadm.go:322] 
	I1114 15:59:33.760517  876396 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:33.760527  876396 kubeadm.go:322] 
	I1114 15:59:33.760624  876396 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760781  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:33.764918  876396 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:33.764993  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:59:33.765010  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:33.767708  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:29.839580  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.339612  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:33.072424  876065 pod_ready.go:81] duration metric: took 4m0.000921839s waiting for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:33.072553  876065 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:33.072606  876065 pod_ready.go:38] duration metric: took 4m10.602378093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:33.072664  876065 kubeadm.go:640] restartCluster took 4m30.632686786s
	W1114 15:59:33.072782  876065 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:33.073057  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:33.769398  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:33.781327  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:33.810672  876396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:33.810839  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:33.810927  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=old-k8s-version-842105 minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.181391  876396 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:34.181528  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.301381  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.919870  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.419262  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.919637  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.419780  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.919453  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.420046  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.919605  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.419845  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.919474  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.419303  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.919616  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.419633  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.919220  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.419298  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.919396  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.420042  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.919886  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.419274  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.920217  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.419952  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.919511  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.419619  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.919762  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.420141  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.919676  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.261922  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.188828866s)
	I1114 15:59:47.262031  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:47.276268  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:47.285701  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:47.294481  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:47.294540  876065 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:59:47.348856  876065 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:59:47.348959  876065 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:47.530233  876065 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:47.530413  876065 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:47.530581  876065 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:47.784516  876065 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:47.420108  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.920005  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.419707  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.527158  876396 kubeadm.go:1081] duration metric: took 14.716377346s to wait for elevateKubeSystemPrivileges.
	I1114 15:59:48.527193  876396 kubeadm.go:406] StartCluster complete in 5m40.211957984s
	I1114 15:59:48.527213  876396 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.527323  876396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:59:48.529723  876396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.530058  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:59:48.530134  876396 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:59:48.530222  876396 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530248  876396 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-842105"
	W1114 15:59:48.530257  876396 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:59:48.530256  876396 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530285  876396 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-842105"
	W1114 15:59:48.530297  876396 addons.go:240] addon metrics-server should already be in state true
	I1114 15:59:48.530321  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530342  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530354  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:59:48.530429  876396 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530457  876396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-842105"
	I1114 15:59:48.530764  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530793  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530805  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530795  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530818  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530822  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.549568  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1114 15:59:48.549642  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I1114 15:59:48.550081  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550240  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550734  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550755  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.550866  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550887  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.551164  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551425  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551622  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.551766  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.551813  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.552539  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1114 15:59:48.553028  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.554044  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.554063  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.554522  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.555069  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555106  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.555404  876396 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-842105"
	W1114 15:59:48.555470  876396 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:59:48.555516  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.555924  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555961  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.576876  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1114 15:59:48.576912  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I1114 15:59:48.576878  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1114 15:59:48.577223  876396 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-842105" context rescaled to 1 replicas
	I1114 15:59:48.577266  876396 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:59:48.579711  876396 out.go:177] * Verifying Kubernetes components...
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577672  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.581751  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:48.580402  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581791  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580422  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581852  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580432  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581919  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.582238  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582286  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582314  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582439  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.582735  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.582751  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.583264  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.584865  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.586792  876396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:59:48.585415  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.588364  876396 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.588378  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:59:48.588398  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.592854  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.594307  876396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:59:47.786524  876065 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:47.786668  876065 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:47.786744  876065 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:47.786843  876065 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:47.786912  876065 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:47.787108  876065 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:47.787698  876065 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:47.788301  876065 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:47.788930  876065 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:47.789533  876065 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:47.790115  876065 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:47.790449  876065 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:47.790523  876065 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:47.975724  876065 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:48.056071  876065 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:48.340177  876065 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:48.733230  876065 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:48.734350  876065 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:48.738369  876065 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:48.740013  876065 out.go:204]   - Booting up control plane ...
	I1114 15:59:48.740143  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:48.740271  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:48.743856  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:48.763450  876065 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:48.764688  876065 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:48.764768  876065 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:59:48.932286  876065 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:48.592918  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.593079  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.595739  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:59:48.595754  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:59:48.595776  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.595826  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.595852  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.596957  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.597212  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.599011  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599448  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.599710  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599755  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.599975  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.600142  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.600304  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.607351  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I1114 15:59:48.607929  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.608484  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.608509  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.608998  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.609237  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.610958  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.611196  876396 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.611210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:59:48.611228  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.613709  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614297  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.614322  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614366  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.614539  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.614631  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.614711  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.708399  876396 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.708481  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:59:48.715087  876396 node_ready.go:49] node "old-k8s-version-842105" has status "Ready":"True"
	I1114 15:59:48.715111  876396 node_ready.go:38] duration metric: took 6.675707ms waiting for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.715124  876396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718748  876396 pod_ready.go:38] duration metric: took 3.605786ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718790  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:48.718857  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:48.750191  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.773186  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:59:48.773210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:59:48.788782  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.847057  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:59:48.847090  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:59:48.905401  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:48.905442  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:59:48.986582  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:49.606449  876396 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1114 15:59:49.606451  876396 api_server.go:72] duration metric: took 1.029145444s to wait for apiserver process to appear ...
	I1114 15:59:49.606506  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:49.606530  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:59:49.709702  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.709732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.710100  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.710130  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.710144  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.710153  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.711953  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.711985  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.711994  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.755976  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:59:49.756696  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.756719  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.757036  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.757103  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.757121  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.757390  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:59:49.757410  876396 api_server.go:131] duration metric: took 150.89717ms to wait for apiserver health ...
	I1114 15:59:49.757447  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:49.763460  876396 system_pods.go:59] 2 kube-system pods found
	I1114 15:59:49.763487  876396 system_pods.go:61] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.763497  876396 system_pods.go:61] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.763509  876396 system_pods.go:74] duration metric: took 6.051168ms to wait for pod list to return data ...
	I1114 15:59:49.763518  876396 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:49.776313  876396 default_sa.go:45] found service account: "default"
	I1114 15:59:49.776341  876396 default_sa.go:55] duration metric: took 12.814566ms for default service account to be created ...
	I1114 15:59:49.776351  876396 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:49.782462  876396 system_pods.go:86] 2 kube-system pods found
	I1114 15:59:49.782502  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.782518  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.782544  876396 retry.go:31] will retry after 311.640315ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.157150  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368304542s)
	I1114 15:59:50.157269  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157286  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.157688  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.157711  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.157730  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157743  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.158219  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.158270  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.169219  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.169264  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.169275  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.169282  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending
	I1114 15:59:50.169304  876396 retry.go:31] will retry after 335.621385ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.357400  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.370764048s)
	I1114 15:59:50.357474  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.357494  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.359782  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.359789  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.359811  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.359829  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.359840  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.360228  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.360264  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.360285  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.360333  876396 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-842105"
	I1114 15:59:50.362545  876396 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:59:50.364302  876396 addons.go:502] enable addons completed in 1.834168315s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:59:50.616547  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.616597  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.616608  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.616623  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.616645  876396 retry.go:31] will retry after 349.737645ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.971245  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.971286  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.971298  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.971312  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.971333  876396 retry.go:31] will retry after 562.981893ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:51.541777  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:51.541822  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:51.541849  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:51.541862  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:51.541870  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:51.541892  876396 retry.go:31] will retry after 617.692214ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:52.166157  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.166192  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.166199  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.166207  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.166211  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.166227  876396 retry.go:31] will retry after 671.968353ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:52.844235  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.844269  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.844276  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.844285  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.844290  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.844309  876396 retry.go:31] will retry after 955.353451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:53.814593  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:53.814626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:53.814636  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:53.814651  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:53.814661  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:53.814680  876396 retry.go:31] will retry after 1.306938168s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:55.127401  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:55.127436  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:55.127445  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:55.127457  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:55.127465  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:55.127488  876396 retry.go:31] will retry after 1.627615182s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.759304  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:56.759339  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:56.759345  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:56.759353  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:56.759358  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:56.759373  876396 retry.go:31] will retry after 2.046606031s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.936792  876065 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004387 seconds
	I1114 15:59:56.936992  876065 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:56.965969  876065 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:57.504894  876065 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:57.505171  876065 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-490998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:59:58.021451  876065 kubeadm.go:322] [bootstrap-token] Using token: 3x3ma3.qtutj9fi1nmgzc3r
	I1114 15:59:58.023064  876065 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:58.023220  876065 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:58.028334  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:59:58.039638  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:58.043783  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:58.048814  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:58.061419  876065 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:58.075996  876065 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:59:58.328245  876065 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:58.435170  876065 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:58.436684  876065 kubeadm.go:322] 
	I1114 15:59:58.436781  876065 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:58.436796  876065 kubeadm.go:322] 
	I1114 15:59:58.436889  876065 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:58.436932  876065 kubeadm.go:322] 
	I1114 15:59:58.436988  876065 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:58.437091  876065 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:58.437155  876065 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:58.437176  876065 kubeadm.go:322] 
	I1114 15:59:58.437231  876065 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:59:58.437239  876065 kubeadm.go:322] 
	I1114 15:59:58.437281  876065 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:59:58.437288  876065 kubeadm.go:322] 
	I1114 15:59:58.437353  876065 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:58.437449  876065 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:58.437564  876065 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:58.437574  876065 kubeadm.go:322] 
	I1114 15:59:58.437684  876065 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:59:58.437800  876065 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:58.437816  876065 kubeadm.go:322] 
	I1114 15:59:58.437937  876065 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438087  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:58.438116  876065 kubeadm.go:322] 	--control-plane 
	I1114 15:59:58.438124  876065 kubeadm.go:322] 
	I1114 15:59:58.438194  876065 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:58.438202  876065 kubeadm.go:322] 
	I1114 15:59:58.438267  876065 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438355  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:58.442217  876065 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:58.442251  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:59:58.442263  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:58.444078  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:58.445560  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:58.467849  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:58.501795  876065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:58.501941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.501965  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=no-preload-490998 minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.557314  876065 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:58.891105  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:59.006867  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.811870  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:58.811905  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:58.811912  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:58.811920  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:58.811924  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:58.811939  876396 retry.go:31] will retry after 2.166453413s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:00.984597  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:00.984626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:00.984632  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:00.984638  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:00.984643  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:00.984661  876396 retry.go:31] will retry after 2.339496963s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:59.620843  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.120941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.621244  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.121507  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.621512  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.121367  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.621449  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.120920  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.620857  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.329034  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:03.329061  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:03.329067  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:03.329074  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:03.329078  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:03.329097  876396 retry.go:31] will retry after 3.593700907s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:06.929268  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:06.929308  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:06.929316  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:06.929327  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:06.929335  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:06.929357  876396 retry.go:31] will retry after 4.929780079s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:04.121245  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:04.620976  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.120894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.621609  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.121209  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.621322  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.121613  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.620968  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.121482  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.621166  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.121032  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.620894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.120992  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.621306  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.121427  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.299388  876065 kubeadm.go:1081] duration metric: took 12.79751335s to wait for elevateKubeSystemPrivileges.
	I1114 16:00:11.299429  876065 kubeadm.go:406] StartCluster complete in 5m8.910317864s
	I1114 16:00:11.299489  876065 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.299594  876065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:00:11.301841  876065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.302097  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 16:00:11.302144  876065 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 16:00:11.302251  876065 addons.go:69] Setting storage-provisioner=true in profile "no-preload-490998"
	I1114 16:00:11.302268  876065 addons.go:69] Setting default-storageclass=true in profile "no-preload-490998"
	I1114 16:00:11.302287  876065 addons.go:231] Setting addon storage-provisioner=true in "no-preload-490998"
	W1114 16:00:11.302301  876065 addons.go:240] addon storage-provisioner should already be in state true
	I1114 16:00:11.302296  876065 addons.go:69] Setting metrics-server=true in profile "no-preload-490998"
	I1114 16:00:11.302327  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:00:11.302346  876065 addons.go:231] Setting addon metrics-server=true in "no-preload-490998"
	W1114 16:00:11.302360  876065 addons.go:240] addon metrics-server should already be in state true
	I1114 16:00:11.302361  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302408  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302287  876065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-490998"
	I1114 16:00:11.302858  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302926  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302942  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302956  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302863  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.303043  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.323089  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1114 16:00:11.323101  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1114 16:00:11.323750  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.323807  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.324339  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324362  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324554  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324577  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324806  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I1114 16:00:11.325059  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325120  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325172  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.325617  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.325652  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326120  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.326138  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.326359  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.326398  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326499  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.326665  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.330090  876065 addons.go:231] Setting addon default-storageclass=true in "no-preload-490998"
	W1114 16:00:11.330115  876065 addons.go:240] addon default-storageclass should already be in state true
	I1114 16:00:11.330144  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.330381  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.330415  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.347198  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1114 16:00:11.347385  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I1114 16:00:11.347562  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1114 16:00:11.347721  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347785  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347897  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.348216  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348232  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348346  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348358  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348366  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348370  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348593  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348729  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348878  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348947  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349143  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349223  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.349270  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.351308  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.353786  876065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 16:00:11.352409  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.355097  876065 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.355119  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 16:00:11.355141  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.356613  876065 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 16:00:11.357928  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 16:00:11.357949  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 16:00:11.357969  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.358548  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359421  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.359450  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359652  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.359922  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.360221  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.360379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.362075  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362508  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.362532  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362831  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.363041  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.363234  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.363390  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.379820  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I1114 16:00:11.380297  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.380905  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.380935  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.381326  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.381573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.383433  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.383722  876065 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.383741  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 16:00:11.383762  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.386432  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.386813  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.386845  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.387062  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.387311  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.387490  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.387661  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.450418  876065 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-490998" context rescaled to 1 replicas
	I1114 16:00:11.450472  876065 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:00:11.452499  876065 out.go:177] * Verifying Kubernetes components...
	I1114 16:00:11.864833  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:11.864867  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:11.864875  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:11.864884  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:11.864891  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:11.864918  876396 retry.go:31] will retry after 6.141765036s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:11.454141  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:11.560863  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.582400  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 16:00:11.582423  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 16:00:11.596910  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.626625  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 16:00:11.626652  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 16:00:11.634166  876065 node_ready.go:35] waiting up to 6m0s for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.634309  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 16:00:11.706391  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.706421  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 16:00:11.737914  876065 node_ready.go:49] node "no-preload-490998" has status "Ready":"True"
	I1114 16:00:11.737955  876065 node_ready.go:38] duration metric: took 103.74965ms waiting for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.737969  876065 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:11.795522  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.910850  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:13.838426  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.277507449s)
	I1114 16:00:13.838488  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838481  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.241527225s)
	I1114 16:00:13.838530  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838555  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838501  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838599  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.204200469s)
	I1114 16:00:13.838636  876065 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1114 16:00:13.838941  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.838992  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839001  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839008  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839016  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.839032  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839047  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839057  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839066  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841315  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841335  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.841398  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841418  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855083  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.059516605s)
	I1114 16:00:13.855146  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855169  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855524  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855572  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855588  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855600  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855612  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855921  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855949  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855961  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855979  876065 addons.go:467] Verifying addon metrics-server=true in "no-preload-490998"
	I1114 16:00:13.864145  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.864168  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.864444  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.864480  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.864491  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.867459  876065 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1114 16:00:13.868861  876065 addons.go:502] enable addons completed in 2.566733189s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1114 16:00:14.067240  876065 pod_ready.go:97] error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067289  876065 pod_ready.go:81] duration metric: took 2.15639988s waiting for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	E1114 16:00:14.067306  876065 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067315  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140385  876065 pod_ready.go:92] pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.140412  876065 pod_ready.go:81] duration metric: took 2.07308909s waiting for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140422  876065 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145818  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.145837  876065 pod_ready.go:81] duration metric: took 5.409163ms waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145845  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150850  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.150868  876065 pod_ready.go:81] duration metric: took 5.017013ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150877  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155895  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.155919  876065 pod_ready.go:81] duration metric: took 5.034132ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155931  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254239  876065 pod_ready.go:92] pod "kube-proxy-9nc8j" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.254270  876065 pod_ready.go:81] duration metric: took 98.331009ms waiting for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254282  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653014  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.653041  876065 pod_ready.go:81] duration metric: took 398.751468ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653049  876065 pod_ready.go:38] duration metric: took 4.915065516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:16.653066  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 16:00:16.653118  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 16:00:16.670396  876065 api_server.go:72] duration metric: took 5.219889322s to wait for apiserver process to appear ...
	I1114 16:00:16.670430  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 16:00:16.670450  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 16:00:16.675936  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 16:00:16.677570  876065 api_server.go:141] control plane version: v1.28.3
	I1114 16:00:16.677592  876065 api_server.go:131] duration metric: took 7.155742ms to wait for apiserver health ...
	I1114 16:00:16.677601  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 16:00:16.858468  876065 system_pods.go:59] 8 kube-system pods found
	I1114 16:00:16.858500  876065 system_pods.go:61] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:16.858505  876065 system_pods.go:61] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:16.858509  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:16.858514  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:16.858518  876065 system_pods.go:61] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:16.858522  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:16.858529  876065 system_pods.go:61] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:16.858534  876065 system_pods.go:61] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:16.858543  876065 system_pods.go:74] duration metric: took 180.935707ms to wait for pod list to return data ...
	I1114 16:00:16.858551  876065 default_sa.go:34] waiting for default service account to be created ...
	I1114 16:00:17.053423  876065 default_sa.go:45] found service account: "default"
	I1114 16:00:17.053478  876065 default_sa.go:55] duration metric: took 194.91891ms for default service account to be created ...
	I1114 16:00:17.053491  876065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 16:00:17.256504  876065 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:17.256539  876065 system_pods.go:89] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:17.256547  876065 system_pods.go:89] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:17.256554  876065 system_pods.go:89] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:17.256561  876065 system_pods.go:89] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:17.256567  876065 system_pods.go:89] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:17.256572  876065 system_pods.go:89] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:17.256582  876065 system_pods.go:89] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:17.256589  876065 system_pods.go:89] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:17.256602  876065 system_pods.go:126] duration metric: took 203.104027ms to wait for k8s-apps to be running ...
	I1114 16:00:17.256615  876065 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:17.256682  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:17.273098  876065 system_svc.go:56] duration metric: took 16.455935ms WaitForService to wait for kubelet.
	I1114 16:00:17.273135  876065 kubeadm.go:581] duration metric: took 5.822636312s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:17.273162  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:17.453601  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:17.453635  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:17.453675  876065 node_conditions.go:105] duration metric: took 180.505934ms to run NodePressure ...
	I1114 16:00:17.453692  876065 start.go:228] waiting for startup goroutines ...
	I1114 16:00:17.453706  876065 start.go:233] waiting for cluster config update ...
	I1114 16:00:17.453748  876065 start.go:242] writing updated cluster config ...
	I1114 16:00:17.454022  876065 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:17.505999  876065 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 16:00:17.509514  876065 out.go:177] * Done! kubectl is now configured to use "no-preload-490998" cluster and "default" namespace by default
	I1114 16:00:18.012940  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:18.012980  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:18.012988  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:18.012998  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:18.013007  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:18.013032  876396 retry.go:31] will retry after 7.087138718s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:25.105773  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:25.105804  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:25.105809  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:25.105817  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:25.105822  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:25.105842  876396 retry.go:31] will retry after 8.539395127s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:33.651084  876396 system_pods.go:86] 6 kube-system pods found
	I1114 16:00:33.651116  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:33.651121  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:33.651125  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:33.651129  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:33.651136  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:33.651141  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:33.651159  876396 retry.go:31] will retry after 10.428154724s: missing components: etcd, kube-apiserver
	I1114 16:00:44.086463  876396 system_pods.go:86] 7 kube-system pods found
	I1114 16:00:44.086496  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:44.086501  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:44.086506  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:44.086511  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:44.086515  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:44.086522  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:44.086527  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:44.086546  876396 retry.go:31] will retry after 10.535877375s: missing components: kube-apiserver
	I1114 16:00:54.631194  876396 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:54.631230  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:54.631237  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:54.631244  876396 system_pods.go:89] "kube-apiserver-old-k8s-version-842105" [3035c074-63ca-4b23-a375-415210397d17] Running
	I1114 16:00:54.631252  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:54.631259  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:54.631265  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:54.631275  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:54.631291  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:54.631304  876396 system_pods.go:126] duration metric: took 1m4.854946282s to wait for k8s-apps to be running ...
	I1114 16:00:54.631317  876396 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:54.631470  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:54.648616  876396 system_svc.go:56] duration metric: took 17.286024ms WaitForService to wait for kubelet.
	I1114 16:00:54.648650  876396 kubeadm.go:581] duration metric: took 1m6.071350783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:54.648677  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:54.652020  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:54.652055  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:54.652069  876396 node_conditions.go:105] duration metric: took 3.385579ms to run NodePressure ...
	I1114 16:00:54.652085  876396 start.go:228] waiting for startup goroutines ...
	I1114 16:00:54.652093  876396 start.go:233] waiting for cluster config update ...
	I1114 16:00:54.652106  876396 start.go:242] writing updated cluster config ...
	I1114 16:00:54.652418  876396 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:54.706394  876396 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1114 16:00:54.708374  876396 out.go:177] 
	W1114 16:00:54.709776  876396 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1114 16:00:54.711177  876396 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1114 16:00:54.712775  876396 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-842105" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:53:47 UTC, ends at Tue 2023-11-14 16:09:56 UTC. --
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.375028421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978196375008275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=ecc816c4-f91e-435b-b2a6-ba42c4c2b1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.375499298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=857ec530-14cb-4e42-ae8b-04c185578e2e name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.375577622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=857ec530-14cb-4e42-ae8b-04c185578e2e name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.375797293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=857ec530-14cb-4e42-ae8b-04c185578e2e name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.426219599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=55afa5e0-d262-403a-b51c-a495ee700f4c name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.426327737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=55afa5e0-d262-403a-b51c-a495ee700f4c name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.428251220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fff921af-53b3-40b7-921e-2ac9c41d2005 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.428834897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978196428817578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=fff921af-53b3-40b7-921e-2ac9c41d2005 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.429608572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6583ccd5-d6a3-4ab7-850b-c0c80ee50c77 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.429779984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6583ccd5-d6a3-4ab7-850b-c0c80ee50c77 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.429977627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6583ccd5-d6a3-4ab7-850b-c0c80ee50c77 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.474123314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=03c42b85-a0ec-4b0d-99e3-6106da7b48dc name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.474215436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=03c42b85-a0ec-4b0d-99e3-6106da7b48dc name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.475748717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=37a4da06-3604-4980-bc0d-1e7c786a377f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.476513192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978196476491073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=37a4da06-3604-4980-bc0d-1e7c786a377f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.477590524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=140e933b-d4f1-4fac-b5a0-6e39c78b896b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.477697903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=140e933b-d4f1-4fac-b5a0-6e39c78b896b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.477850220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=140e933b-d4f1-4fac-b5a0-6e39c78b896b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.516764300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2f76a0d8-8ba3-4c67-8d98-d725f7fcfe45 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.516846546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2f76a0d8-8ba3-4c67-8d98-d725f7fcfe45 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.518733945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=48f94235-5213-47fe-9d04-3c4a32a2cdb8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.519161943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978196519147021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=48f94235-5213-47fe-9d04-3c4a32a2cdb8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.519737318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=406004ff-2ee2-425a-951c-ef01fe40fb15 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.519807508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=406004ff-2ee2-425a-951c-ef01fe40fb15 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:09:56 old-k8s-version-842105 crio[733]: time="2023-11-14 16:09:56.519962345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=406004ff-2ee2-425a-951c-ef01fe40fb15 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e4eb788285cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   ca10a67a4f78c       storage-provisioner
	797c248b411c1       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   8264d893bf6a1       kube-proxy-g86p9
	c5e946600b40d       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   07a302208d0d1       coredns-5644d7b6d9-8855d
	7c9486a8d7813       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   a5cbf7428fdeb       etcd-old-k8s-version-842105
	3cf9dfdbec257       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   649683159c979       kube-scheduler-old-k8s-version-842105
	bf630d3e93819       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   861602a1268ed       kube-apiserver-old-k8s-version-842105
	19215628874ec       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   b05616eb29a66       kube-controller-manager-old-k8s-version-842105
	
	* 
	* ==> coredns [c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e] <==
	* .:53
	2023-11-14T15:59:50.679Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-14T15:59:50.679Z [INFO] CoreDNS-1.6.2
	2023-11-14T15:59:50.679Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-14T16:00:16.986Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2023-11-14T16:00:16.996Z [INFO] 127.0.0.1:43774 - 49059 "HINFO IN 4264295416034289763.7366177133355996784. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010283424s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-842105
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-842105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=old-k8s-version-842105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:59:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:09:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:09:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:09:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:09:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.151
	  Hostname:    old-k8s-version-842105
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 02cec9847ade4e5f882c0d8ba9945a51
	 System UUID:                02cec984-7ade-4e5f-882c-0d8ba9945a51
	 Boot ID:                    c641e42a-9e20-4877-8a02-69dac1e980b3
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-8855d                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-842105                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                kube-apiserver-old-k8s-version-842105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                kube-controller-manager-old-k8s-version-842105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                kube-proxy-g86p9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-842105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                metrics-server-74d5856cc6-8cxxt                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-842105     Node old-k8s-version-842105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-842105     Node old-k8s-version-842105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-842105     Node old-k8s-version-842105 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-842105  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov14 15:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074304] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.516011] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.467833] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149007] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.411302] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.821500] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.102389] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.154631] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.110901] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +0.214191] systemd-fstab-generator[718]: Ignoring "noauto" for root device
	[Nov14 15:54] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.458820] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.996845] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.356906] hrtimer: interrupt took 10215455 ns
	[  +0.379719] kauditd_printk_skb: 5 callbacks suppressed
	[Nov14 15:59] systemd-fstab-generator[3223]: Ignoring "noauto" for root device
	[  +1.263041] kauditd_printk_skb: 8 callbacks suppressed
	[ +36.187593] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482] <==
	* 2023-11-14 15:59:25.181065 I | raft: cec33aa8f0724833 became follower at term 0
	2023-11-14 15:59:25.181090 I | raft: newRaft cec33aa8f0724833 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-14 15:59:25.181109 I | raft: cec33aa8f0724833 became follower at term 1
	2023-11-14 15:59:25.190204 W | auth: simple token is not cryptographically signed
	2023-11-14 15:59:25.195821 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-14 15:59:25.197979 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 15:59:25.198123 I | embed: listening for metrics on http://192.168.72.151:2381
	2023-11-14 15:59:25.198379 I | etcdserver: cec33aa8f0724833 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-14 15:59:25.198987 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-14 15:59:25.199281 I | etcdserver/membership: added member cec33aa8f0724833 [https://192.168.72.151:2380] to cluster 31c137043c99215d
	2023-11-14 15:59:25.381606 I | raft: cec33aa8f0724833 is starting a new election at term 1
	2023-11-14 15:59:25.381790 I | raft: cec33aa8f0724833 became candidate at term 2
	2023-11-14 15:59:25.381818 I | raft: cec33aa8f0724833 received MsgVoteResp from cec33aa8f0724833 at term 2
	2023-11-14 15:59:25.381840 I | raft: cec33aa8f0724833 became leader at term 2
	2023-11-14 15:59:25.381857 I | raft: raft.node: cec33aa8f0724833 elected leader cec33aa8f0724833 at term 2
	2023-11-14 15:59:25.382351 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-14 15:59:25.382728 I | etcdserver: published {Name:old-k8s-version-842105 ClientURLs:[https://192.168.72.151:2379]} to cluster 31c137043c99215d
	2023-11-14 15:59:25.382892 I | embed: ready to serve client requests
	2023-11-14 15:59:25.383941 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-14 15:59:25.384024 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-14 15:59:25.384050 I | embed: ready to serve client requests
	2023-11-14 15:59:25.385102 I | embed: serving client requests on 192.168.72.151:2379
	2023-11-14 15:59:25.390914 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-14 16:09:25.516830 I | mvcc: store.index: compact 650
	2023-11-14 16:09:25.519750 I | mvcc: finished scheduled compaction at 650 (took 2.231208ms)
	
	* 
	* ==> kernel <==
	*  16:09:56 up 16 min,  0 users,  load average: 0.79, 0.46, 0.26
	Linux old-k8s-version-842105 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d] <==
	* I1114 16:02:51.894498       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:02:51.894614       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:02:51.894734       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:02:51.894747       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:04:29.753947       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:04:29.754314       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:04:29.754399       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:04:29.754434       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:05:29.754736       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:05:29.754866       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:05:29.754972       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:05:29.755012       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:07:29.755552       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:07:29.755731       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:07:29.755820       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:07:29.755833       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:09:29.757205       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:09:29.757584       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:09:29.757740       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:09:29.757775       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31] <==
	* E1114 16:03:51.516983       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:04:04.827394       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:04:21.769131       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:04:36.829530       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:04:52.021417       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:05:08.831851       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:05:22.273374       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:05:40.834089       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:05:52.525596       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:06:12.836053       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:06:22.777728       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:06:44.838709       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:06:53.030278       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:07:16.840999       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:07:23.282836       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:07:48.843452       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:07:53.534905       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:08:20.845819       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:08:23.787155       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:08:52.848065       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:08:54.039845       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1114 16:09:24.291484       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:09:24.850142       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:09:54.543249       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:09:56.852297       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439] <==
	* W1114 15:59:51.397331       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1114 15:59:51.413250       1 node.go:135] Successfully retrieved node IP: 192.168.72.151
	I1114 15:59:51.413413       1 server_others.go:149] Using iptables Proxier.
	I1114 15:59:51.415005       1 server.go:529] Version: v1.16.0
	I1114 15:59:51.419594       1 config.go:313] Starting service config controller
	I1114 15:59:51.420190       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1114 15:59:51.420352       1 config.go:131] Starting endpoints config controller
	I1114 15:59:51.420367       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1114 15:59:51.520970       1 shared_informer.go:204] Caches are synced for service config 
	I1114 15:59:51.521309       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc] <==
	* W1114 15:59:28.757576       1 authentication.go:79] Authentication is disabled
	I1114 15:59:28.757750       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1114 15:59:28.759590       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1114 15:59:28.809845       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:59:28.809967       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 15:59:28.810039       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:59:28.810102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:59:28.810166       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:28.810196       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 15:59:28.815733       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:59:28.815921       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:59:28.816015       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:28.816149       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:59:28.824882       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:29.811494       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:59:29.818048       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 15:59:29.819553       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:59:29.820520       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:59:29.827024       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:29.830465       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 15:59:29.831577       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:59:29.832464       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:59:29.835132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:29.836907       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:59:29.838101       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:53:47 UTC, ends at Tue 2023-11-14 16:09:57 UTC. --
	Nov 14 16:05:29 old-k8s-version-842105 kubelet[3241]: E1114 16:05:29.676031    3241 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:05:29 old-k8s-version-842105 kubelet[3241]: E1114 16:05:29.676127    3241 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:05:29 old-k8s-version-842105 kubelet[3241]: E1114 16:05:29.676186    3241 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:05:29 old-k8s-version-842105 kubelet[3241]: E1114 16:05:29.676217    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 14 16:05:41 old-k8s-version-842105 kubelet[3241]: E1114 16:05:41.656403    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:05:54 old-k8s-version-842105 kubelet[3241]: E1114 16:05:54.656066    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:06:08 old-k8s-version-842105 kubelet[3241]: E1114 16:06:08.657046    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:06:21 old-k8s-version-842105 kubelet[3241]: E1114 16:06:21.656545    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:06:32 old-k8s-version-842105 kubelet[3241]: E1114 16:06:32.658408    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:06:45 old-k8s-version-842105 kubelet[3241]: E1114 16:06:45.656066    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:06:59 old-k8s-version-842105 kubelet[3241]: E1114 16:06:59.656145    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:07:13 old-k8s-version-842105 kubelet[3241]: E1114 16:07:13.656604    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:07:25 old-k8s-version-842105 kubelet[3241]: E1114 16:07:25.656270    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:07:38 old-k8s-version-842105 kubelet[3241]: E1114 16:07:38.656097    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:07:49 old-k8s-version-842105 kubelet[3241]: E1114 16:07:49.656004    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:08:04 old-k8s-version-842105 kubelet[3241]: E1114 16:08:04.656806    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:08:17 old-k8s-version-842105 kubelet[3241]: E1114 16:08:17.660159    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:08:30 old-k8s-version-842105 kubelet[3241]: E1114 16:08:30.656153    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:08:43 old-k8s-version-842105 kubelet[3241]: E1114 16:08:43.656366    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:08:54 old-k8s-version-842105 kubelet[3241]: E1114 16:08:54.655938    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:09:07 old-k8s-version-842105 kubelet[3241]: E1114 16:09:07.656229    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:09:18 old-k8s-version-842105 kubelet[3241]: E1114 16:09:18.656816    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:09:22 old-k8s-version-842105 kubelet[3241]: E1114 16:09:22.735981    3241 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 14 16:09:32 old-k8s-version-842105 kubelet[3241]: E1114 16:09:32.656311    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:09:44 old-k8s-version-842105 kubelet[3241]: E1114 16:09:44.656781    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382] <==
	* I1114 15:59:51.505425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 15:59:51.516585       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 15:59:51.516917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 15:59:51.529153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 15:59:51.529367       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842105_dc661fd8-34fe-46bb-bc2d-b1a1df28b409!
	I1114 15:59:51.530556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"401edbd4-27d4-4297-b70e-a42b51e34980", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-842105_dc661fd8-34fe-46bb-bc2d-b1a1df28b409 became leader
	I1114 15:59:51.630807       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842105_dc661fd8-34fe-46bb-bc2d-b1a1df28b409!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-842105 -n old-k8s-version-842105
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-842105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-8cxxt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-842105 describe pod metrics-server-74d5856cc6-8cxxt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-842105 describe pod metrics-server-74d5856cc6-8cxxt: exit status 1 (67.679883ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-8cxxt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-842105 describe pod metrics-server-74d5856cc6-8cxxt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (392.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279880 -n embed-certs-279880
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:14:31.556128589 +0000 UTC m=+5739.086313559
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-279880 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-279880 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.845µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-279880 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
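For reference, the check that timed out above can be repeated by hand against the same cluster; this is only a minimal sketch, assuming kubectl still points at the embed-certs-279880 context and using the namespace, label, and deployment name reported above:

	kubectl --context embed-certs-279880 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# inspect which image the dashboard-metrics-scraper deployment actually references
	kubectl --context embed-certs-279880 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'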
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-279880 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-279880 logs -n 25: (2.49734652s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 16:13 UTC | 14 Nov 23 16:13 UTC |
	| start   | -p newest-cni-161256 --memory=2200 --alsologtostderr   | newest-cni-161256            | jenkins | v1.32.0 | 14 Nov 23 16:13 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 16:14 UTC | 14 Nov 23 16:14 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 16:13:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 16:13:57.784836  881469 out.go:296] Setting OutFile to fd 1 ...
	I1114 16:13:57.785128  881469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 16:13:57.785138  881469 out.go:309] Setting ErrFile to fd 2...
	I1114 16:13:57.785146  881469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 16:13:57.785348  881469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 16:13:57.785980  881469 out.go:303] Setting JSON to false
	I1114 16:13:57.787108  881469 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":46590,"bootTime":1699931848,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 16:13:57.787173  881469 start.go:138] virtualization: kvm guest
	I1114 16:13:57.789820  881469 out.go:177] * [newest-cni-161256] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 16:13:57.791257  881469 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 16:13:57.791324  881469 notify.go:220] Checking for updates...
	I1114 16:13:57.792683  881469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 16:13:57.794219  881469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:13:57.795667  881469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:57.797148  881469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 16:13:57.798544  881469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 16:13:57.800427  881469 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800574  881469 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800696  881469 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800869  881469 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 16:13:57.840976  881469 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 16:13:57.842309  881469 start.go:298] selected driver: kvm2
	I1114 16:13:57.842324  881469 start.go:902] validating driver "kvm2" against <nil>
	I1114 16:13:57.842335  881469 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 16:13:57.843244  881469 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 16:13:57.843340  881469 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 16:13:57.858215  881469 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 16:13:57.858276  881469 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1114 16:13:57.858298  881469 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1114 16:13:57.858505  881469 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1114 16:13:57.858616  881469 cni.go:84] Creating CNI manager for ""
	I1114 16:13:57.858636  881469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 16:13:57.858647  881469 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 16:13:57.858656  881469 start_flags.go:323] config:
	{Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 16:13:57.858813  881469 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 16:13:57.861088  881469 out.go:177] * Starting control plane node newest-cni-161256 in cluster newest-cni-161256
	I1114 16:13:57.862595  881469 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 16:13:57.862632  881469 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 16:13:57.862691  881469 cache.go:56] Caching tarball of preloaded images
	I1114 16:13:57.862796  881469 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 16:13:57.862812  881469 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 16:13:57.862916  881469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json ...
	I1114 16:13:57.862949  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json: {Name:mka288a2361f2be2d9a752ce4e344331e93a7d9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:13:57.863168  881469 start.go:365] acquiring machines lock for newest-cni-161256: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 16:13:57.863222  881469 start.go:369] acquired machines lock for "newest-cni-161256" in 33.515µs
	I1114 16:13:57.863248  881469 start.go:93] Provisioning new machine with config: &{Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:13:57.863331  881469 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 16:13:57.865053  881469 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1114 16:13:57.865182  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:13:57.865231  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:13:57.879338  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I1114 16:13:57.879746  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:13:57.880279  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:13:57.880306  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:13:57.880723  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:13:57.880962  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:13:57.881183  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:13:57.881364  881469 start.go:159] libmachine.API.Create for "newest-cni-161256" (driver="kvm2")
	I1114 16:13:57.881402  881469 client.go:168] LocalClient.Create starting
	I1114 16:13:57.881465  881469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 16:13:57.881513  881469 main.go:141] libmachine: Decoding PEM data...
	I1114 16:13:57.881534  881469 main.go:141] libmachine: Parsing certificate...
	I1114 16:13:57.881631  881469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 16:13:57.881666  881469 main.go:141] libmachine: Decoding PEM data...
	I1114 16:13:57.881685  881469 main.go:141] libmachine: Parsing certificate...
	I1114 16:13:57.881723  881469 main.go:141] libmachine: Running pre-create checks...
	I1114 16:13:57.881758  881469 main.go:141] libmachine: (newest-cni-161256) Calling .PreCreateCheck
	I1114 16:13:57.882257  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:13:57.882866  881469 main.go:141] libmachine: Creating machine...
	I1114 16:13:57.882890  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Create
	I1114 16:13:57.883081  881469 main.go:141] libmachine: (newest-cni-161256) Creating KVM machine...
	I1114 16:13:57.884479  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found existing default KVM network
	I1114 16:13:57.885821  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.885625  881491 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:f8:83} reservation:<nil>}
	I1114 16:13:57.886569  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.886459  881491 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:58:bc} reservation:<nil>}
	I1114 16:13:57.887505  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.887399  881491 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:64:42} reservation:<nil>}
	I1114 16:13:57.888668  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.888578  881491 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e7120}
	I1114 16:13:57.894409  881469 main.go:141] libmachine: (newest-cni-161256) DBG | trying to create private KVM network mk-newest-cni-161256 192.168.72.0/24...
	I1114 16:13:57.973147  881469 main.go:141] libmachine: (newest-cni-161256) DBG | private KVM network mk-newest-cni-161256 192.168.72.0/24 created
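The network.go lines above show the kvm2 driver probing 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24, finding each already bound to a virbr interface, and settling on 192.168.72.0/24 for the private mk-newest-cni-161256 network. A minimal, self-contained Go sketch of that kind of free-subnet scan follows; the hard-coded takenSubnets list, the step of 11 between candidates, and the freePrivateSubnet name are illustrative assumptions and not minikube's actual network.go implementation.

package main

import (
	"fmt"
	"net"
)

// takenSubnets would normally come from inspecting existing libvirt networks
// and host interfaces (virbr2/virbr3/virbr4 in the log above); hard-coded here.
var takenSubnets = []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}

// freePrivateSubnet walks candidate 192.168.x.0/24 subnets (39, 50, 61, 72, ...)
// and returns the first one that is not already in use.
func freePrivateSubnet() (*net.IPNet, error) {
	taken := map[string]bool{}
	for _, s := range takenSubnets {
		_, n, err := net.ParseCIDR(s)
		if err != nil {
			return nil, err
		}
		taken[n.String()] = true
	}
	for third := 39; third <= 254; third += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, candidate, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		if !taken[candidate.String()] {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := freePrivateSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", subnet) // 192.168.72.0/24 in this run
}
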
	I1114 16:13:57.973201  881469 main.go:141] libmachine: (newest-cni-161256) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 ...
	I1114 16:13:57.973221  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.973079  881491 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:57.973318  881469 main.go:141] libmachine: (newest-cni-161256) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 16:13:57.973397  881469 main.go:141] libmachine: (newest-cni-161256) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 16:13:58.236968  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.236841  881491 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa...
	I1114 16:13:58.389420  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.389261  881491 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/newest-cni-161256.rawdisk...
	I1114 16:13:58.389453  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Writing magic tar header
	I1114 16:13:58.389471  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Writing SSH key tar header
	I1114 16:13:58.389480  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.389421  881491 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 ...
	I1114 16:13:58.389546  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256
	I1114 16:13:58.389602  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 (perms=drwx------)
	I1114 16:13:58.389630  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 16:13:58.389644  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 16:13:58.389655  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:58.389680  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 16:13:58.389693  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 16:13:58.389704  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 16:13:58.389718  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 16:13:58.389785  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 16:13:58.389810  881469 main.go:141] libmachine: (newest-cni-161256) Creating domain...
	I1114 16:13:58.389826  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 16:13:58.389844  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins
	I1114 16:13:58.389857  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home
	I1114 16:13:58.389872  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Skipping /home - not owner
	I1114 16:13:58.391223  881469 main.go:141] libmachine: (newest-cni-161256) define libvirt domain using xml: 
	I1114 16:13:58.391256  881469 main.go:141] libmachine: (newest-cni-161256) <domain type='kvm'>
	I1114 16:13:58.391270  881469 main.go:141] libmachine: (newest-cni-161256)   <name>newest-cni-161256</name>
	I1114 16:13:58.391280  881469 main.go:141] libmachine: (newest-cni-161256)   <memory unit='MiB'>2200</memory>
	I1114 16:13:58.391330  881469 main.go:141] libmachine: (newest-cni-161256)   <vcpu>2</vcpu>
	I1114 16:13:58.391364  881469 main.go:141] libmachine: (newest-cni-161256)   <features>
	I1114 16:13:58.391375  881469 main.go:141] libmachine: (newest-cni-161256)     <acpi/>
	I1114 16:13:58.391383  881469 main.go:141] libmachine: (newest-cni-161256)     <apic/>
	I1114 16:13:58.391392  881469 main.go:141] libmachine: (newest-cni-161256)     <pae/>
	I1114 16:13:58.391406  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.391420  881469 main.go:141] libmachine: (newest-cni-161256)   </features>
	I1114 16:13:58.391434  881469 main.go:141] libmachine: (newest-cni-161256)   <cpu mode='host-passthrough'>
	I1114 16:13:58.391461  881469 main.go:141] libmachine: (newest-cni-161256)   
	I1114 16:13:58.391472  881469 main.go:141] libmachine: (newest-cni-161256)   </cpu>
	I1114 16:13:58.391487  881469 main.go:141] libmachine: (newest-cni-161256)   <os>
	I1114 16:13:58.391502  881469 main.go:141] libmachine: (newest-cni-161256)     <type>hvm</type>
	I1114 16:13:58.391517  881469 main.go:141] libmachine: (newest-cni-161256)     <boot dev='cdrom'/>
	I1114 16:13:58.391528  881469 main.go:141] libmachine: (newest-cni-161256)     <boot dev='hd'/>
	I1114 16:13:58.391538  881469 main.go:141] libmachine: (newest-cni-161256)     <bootmenu enable='no'/>
	I1114 16:13:58.391549  881469 main.go:141] libmachine: (newest-cni-161256)   </os>
	I1114 16:13:58.391561  881469 main.go:141] libmachine: (newest-cni-161256)   <devices>
	I1114 16:13:58.391572  881469 main.go:141] libmachine: (newest-cni-161256)     <disk type='file' device='cdrom'>
	I1114 16:13:58.391609  881469 main.go:141] libmachine: (newest-cni-161256)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/boot2docker.iso'/>
	I1114 16:13:58.391636  881469 main.go:141] libmachine: (newest-cni-161256)       <target dev='hdc' bus='scsi'/>
	I1114 16:13:58.391662  881469 main.go:141] libmachine: (newest-cni-161256)       <readonly/>
	I1114 16:13:58.391680  881469 main.go:141] libmachine: (newest-cni-161256)     </disk>
	I1114 16:13:58.391697  881469 main.go:141] libmachine: (newest-cni-161256)     <disk type='file' device='disk'>
	I1114 16:13:58.391712  881469 main.go:141] libmachine: (newest-cni-161256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 16:13:58.391744  881469 main.go:141] libmachine: (newest-cni-161256)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/newest-cni-161256.rawdisk'/>
	I1114 16:13:58.391763  881469 main.go:141] libmachine: (newest-cni-161256)       <target dev='hda' bus='virtio'/>
	I1114 16:13:58.391776  881469 main.go:141] libmachine: (newest-cni-161256)     </disk>
	I1114 16:13:58.391792  881469 main.go:141] libmachine: (newest-cni-161256)     <interface type='network'>
	I1114 16:13:58.391809  881469 main.go:141] libmachine: (newest-cni-161256)       <source network='mk-newest-cni-161256'/>
	I1114 16:13:58.391822  881469 main.go:141] libmachine: (newest-cni-161256)       <model type='virtio'/>
	I1114 16:13:58.391849  881469 main.go:141] libmachine: (newest-cni-161256)     </interface>
	I1114 16:13:58.391876  881469 main.go:141] libmachine: (newest-cni-161256)     <interface type='network'>
	I1114 16:13:58.391892  881469 main.go:141] libmachine: (newest-cni-161256)       <source network='default'/>
	I1114 16:13:58.391904  881469 main.go:141] libmachine: (newest-cni-161256)       <model type='virtio'/>
	I1114 16:13:58.391918  881469 main.go:141] libmachine: (newest-cni-161256)     </interface>
	I1114 16:13:58.391929  881469 main.go:141] libmachine: (newest-cni-161256)     <serial type='pty'>
	I1114 16:13:58.391939  881469 main.go:141] libmachine: (newest-cni-161256)       <target port='0'/>
	I1114 16:13:58.391951  881469 main.go:141] libmachine: (newest-cni-161256)     </serial>
	I1114 16:13:58.391977  881469 main.go:141] libmachine: (newest-cni-161256)     <console type='pty'>
	I1114 16:13:58.391998  881469 main.go:141] libmachine: (newest-cni-161256)       <target type='serial' port='0'/>
	I1114 16:13:58.392013  881469 main.go:141] libmachine: (newest-cni-161256)     </console>
	I1114 16:13:58.392024  881469 main.go:141] libmachine: (newest-cni-161256)     <rng model='virtio'>
	I1114 16:13:58.392038  881469 main.go:141] libmachine: (newest-cni-161256)       <backend model='random'>/dev/random</backend>
	I1114 16:13:58.392049  881469 main.go:141] libmachine: (newest-cni-161256)     </rng>
	I1114 16:13:58.392061  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.392074  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.392086  881469 main.go:141] libmachine: (newest-cni-161256)   </devices>
	I1114 16:13:58.392101  881469 main.go:141] libmachine: (newest-cni-161256) </domain>
	I1114 16:13:58.392123  881469 main.go:141] libmachine: (newest-cni-161256) 
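The block above is the complete libvirt domain XML minikube generates for the VM: 2200 MiB of RAM, 2 vCPUs, host-passthrough CPU, the boot2docker ISO attached as a SCSI cdrom, the raw disk on virtio, and two virtio NICs on the private and default networks. Defining and booting a domain from such XML with the libvirt Go bindings looks roughly like the sketch below; the libvirt.org/go/libvirt import path and the abridged XML string are assumptions for illustration, and this is not the kvm2 driver's own code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system libvirt daemon.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Abridged stand-in for the full <domain> document shown in the log above.
	domainXML := `<domain type='kvm'>
  <name>newest-cni-161256</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

	// Persistently define the domain, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}
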
	I1114 16:13:58.397370  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:8b:ec:96 in network default
	I1114 16:13:58.398066  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring networks are active...
	I1114 16:13:58.398113  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:13:58.398797  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring network default is active
	I1114 16:13:58.399287  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring network mk-newest-cni-161256 is active
	I1114 16:13:58.399958  881469 main.go:141] libmachine: (newest-cni-161256) Getting domain xml...
	I1114 16:13:58.400849  881469 main.go:141] libmachine: (newest-cni-161256) Creating domain...
	I1114 16:13:59.726283  881469 main.go:141] libmachine: (newest-cni-161256) Waiting to get IP...
	I1114 16:13:59.727449  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:13:59.727962  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:13:59.727986  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:59.727939  881491 retry.go:31] will retry after 279.361106ms: waiting for machine to come up
	I1114 16:14:00.009714  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.010197  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.010237  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.010159  881491 retry.go:31] will retry after 359.592157ms: waiting for machine to come up
	I1114 16:14:00.372007  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.372590  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.372624  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.372515  881491 retry.go:31] will retry after 324.730593ms: waiting for machine to come up
	I1114 16:14:00.698994  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.699575  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.699610  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.699489  881491 retry.go:31] will retry after 476.141432ms: waiting for machine to come up
	I1114 16:14:01.177324  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:01.177753  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:01.177783  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:01.177714  881491 retry.go:31] will retry after 693.627681ms: waiting for machine to come up
	I1114 16:14:01.872724  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:01.873311  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:01.873346  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:01.873237  881491 retry.go:31] will retry after 922.207125ms: waiting for machine to come up
	I1114 16:14:02.796995  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:02.797487  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:02.797515  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:02.797447  881491 retry.go:31] will retry after 828.947009ms: waiting for machine to come up
	I1114 16:14:03.627753  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:03.628173  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:03.628210  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:03.628118  881491 retry.go:31] will retry after 997.915404ms: waiting for machine to come up
	I1114 16:14:04.627128  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:04.627568  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:04.627602  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:04.627510  881491 retry.go:31] will retry after 1.497303924s: waiting for machine to come up
	I1114 16:14:06.126245  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:06.126708  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:06.126773  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:06.126683  881491 retry.go:31] will retry after 2.041273523s: waiting for machine to come up
	I1114 16:14:08.169598  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:08.170190  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:08.170229  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:08.170121  881491 retry.go:31] will retry after 1.842095296s: waiting for machine to come up
	I1114 16:14:10.015052  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:10.015611  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:10.015646  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:10.015549  881491 retry.go:31] will retry after 2.927670132s: waiting for machine to come up
	I1114 16:14:12.944720  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:12.945324  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:12.945360  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:12.945263  881491 retry.go:31] will retry after 3.702057643s: waiting for machine to come up
	I1114 16:14:16.650490  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:16.650958  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:16.650990  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:16.650908  881491 retry.go:31] will retry after 5.604460167s: waiting for machine to come up
	I1114 16:14:22.258010  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.258475  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has current primary IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.258533  881469 main.go:141] libmachine: (newest-cni-161256) Found IP for machine: 192.168.72.15
	I1114 16:14:22.258560  881469 main.go:141] libmachine: (newest-cni-161256) Reserving static IP address...
	I1114 16:14:22.258936  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find host DHCP lease matching {name: "newest-cni-161256", mac: "52:54:00:06:29:44", ip: "192.168.72.15"} in network mk-newest-cni-161256
	I1114 16:14:22.344546  881469 main.go:141] libmachine: (newest-cni-161256) Reserved static IP address: 192.168.72.15
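The "will retry after ..." lines above are the driver polling libvirt's DHCP leases for the VM's MAC address, with a growing, jittered delay between attempts (roughly 280 ms up to a few seconds) until the lease for 192.168.72.15 appears. A self-contained Go sketch of that pattern follows; lookupLeaseIP is a hypothetical stand-in for the real lease query, and the backoff constants are illustrative rather than the exact values used by minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP is a placeholder for querying the network's DHCP leases for a MAC.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP polls until a lease shows up or the deadline passes, growing the
// delay (with jitter) between attempts, similar to the retry lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2 // grow the delay, capped at a few seconds
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:06:29:44", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
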
	I1114 16:14:22.344599  881469 main.go:141] libmachine: (newest-cni-161256) Waiting for SSH to be available...
	I1114 16:14:22.344611  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Getting to WaitForSSH function...
	I1114 16:14:22.347942  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.348375  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.348409  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.348585  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using SSH client type: external
	I1114 16:14:22.348616  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa (-rw-------)
	I1114 16:14:22.348666  881469 main.go:141] libmachine: (newest-cni-161256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 16:14:22.348685  881469 main.go:141] libmachine: (newest-cni-161256) DBG | About to run SSH command:
	I1114 16:14:22.348794  881469 main.go:141] libmachine: (newest-cni-161256) DBG | exit 0
	I1114 16:14:22.444878  881469 main.go:141] libmachine: (newest-cni-161256) DBG | SSH cmd err, output: <nil>: 
	I1114 16:14:22.445251  881469 main.go:141] libmachine: (newest-cni-161256) KVM machine creation complete!
	I1114 16:14:22.445546  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:14:22.446255  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:22.446483  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:22.446698  881469 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 16:14:22.446723  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetState
	I1114 16:14:22.448178  881469 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 16:14:22.448199  881469 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 16:14:22.448209  881469 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 16:14:22.448240  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.451143  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.451592  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.451626  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.451815  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.452017  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.452188  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.452378  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.452632  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.453178  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.453198  881469 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 16:14:22.584113  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 16:14:22.584150  881469 main.go:141] libmachine: Detecting the provisioner...
	I1114 16:14:22.584162  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.587100  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.587496  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.587533  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.587647  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.587854  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.588086  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.588282  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.588472  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.588880  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.588894  881469 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 16:14:22.713853  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 16:14:22.714001  881469 main.go:141] libmachine: found compatible host: buildroot
	I1114 16:14:22.714021  881469 main.go:141] libmachine: Provisioning with buildroot...
	I1114 16:14:22.714035  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:22.714353  881469 buildroot.go:166] provisioning hostname "newest-cni-161256"
	I1114 16:14:22.714397  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:22.714634  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.717497  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.717871  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.717902  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.718002  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.718218  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.718401  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.718569  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.718809  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.719156  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.719179  881469 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-161256 && echo "newest-cni-161256" | sudo tee /etc/hostname
	I1114 16:14:22.862571  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161256
	
	I1114 16:14:22.862597  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.865536  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.865784  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.865817  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.866066  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.866276  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.866445  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.866579  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.866744  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.867182  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.867203  881469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-161256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-161256/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-161256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 16:14:23.001359  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 16:14:23.001407  881469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 16:14:23.001464  881469 buildroot.go:174] setting up certificates
	I1114 16:14:23.001485  881469 provision.go:83] configureAuth start
	I1114 16:14:23.001511  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:23.001901  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.004872  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.005238  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.005269  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.005429  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.007776  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.008237  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.008260  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.008470  881469 provision.go:138] copyHostCerts
	I1114 16:14:23.008534  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 16:14:23.008559  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 16:14:23.008659  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 16:14:23.008811  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 16:14:23.008830  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 16:14:23.008881  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 16:14:23.008960  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 16:14:23.008970  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 16:14:23.009025  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 16:14:23.009094  881469 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.newest-cni-161256 san=[192.168.72.15 192.168.72.15 localhost 127.0.0.1 minikube newest-cni-161256]
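provision.go then issues a per-machine server certificate signed by the local minikube CA, with subject alternative names covering the VM IP, localhost, and the machine name (the san=[...] list above). A condensed Go sketch of issuing such a cert with crypto/x509 follows; it generates a throwaway CA in place of loading ca.pem/ca-key.pem, and apart from the names and IPs taken from the log line everything is illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; minikube loads ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "newest-cni-161256", Organization: []string{"jenkins.newest-cni-161256"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-161256"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.15"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
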
	I1114 16:14:23.079504  881469 provision.go:172] copyRemoteCerts
	I1114 16:14:23.079572  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 16:14:23.079600  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.082584  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.082929  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.082976  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.083207  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.083372  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.083537  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.083692  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.179440  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 16:14:23.202630  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 16:14:23.226109  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 16:14:23.249807  881469 provision.go:86] duration metric: configureAuth took 248.303658ms
	I1114 16:14:23.249837  881469 buildroot.go:189] setting minikube options for container-runtime
	I1114 16:14:23.250074  881469 config.go:182] Loaded profile config "newest-cni-161256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:14:23.250179  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.253266  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.253742  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.253777  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.254015  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.254251  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.254401  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.254555  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.254745  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:23.255215  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:23.255246  881469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 16:14:23.578903  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 16:14:23.578934  881469 main.go:141] libmachine: Checking connection to Docker...
	I1114 16:14:23.578944  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetURL
	I1114 16:14:23.580328  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using libvirt version 6000000
	I1114 16:14:23.583089  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.583490  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.583521  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.583676  881469 main.go:141] libmachine: Docker is up and running!
	I1114 16:14:23.583692  881469 main.go:141] libmachine: Reticulating splines...
	I1114 16:14:23.583699  881469 client.go:171] LocalClient.Create took 25.702286469s
	I1114 16:14:23.583722  881469 start.go:167] duration metric: libmachine.API.Create for "newest-cni-161256" took 25.702360903s
	I1114 16:14:23.583734  881469 start.go:300] post-start starting for "newest-cni-161256" (driver="kvm2")
	I1114 16:14:23.583742  881469 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 16:14:23.583775  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.584090  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 16:14:23.584123  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.586647  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.586970  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.587000  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.587141  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.587285  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.587384  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.587503  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.678050  881469 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 16:14:23.682156  881469 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 16:14:23.682188  881469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 16:14:23.682263  881469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 16:14:23.682436  881469 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 16:14:23.682596  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 16:14:23.690851  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 16:14:23.716446  881469 start.go:303] post-start completed in 132.696208ms
	I1114 16:14:23.716505  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:14:23.717172  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.719919  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.720304  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.720331  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.720639  881469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json ...
	I1114 16:14:23.720874  881469 start.go:128] duration metric: createHost completed in 25.857531002s
	I1114 16:14:23.720903  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.723370  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.723733  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.723760  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.723892  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.724103  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.724271  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.724405  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.724612  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:23.724962  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:23.724976  881469 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 16:14:23.849570  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699978463.832211650
	
	I1114 16:14:23.849596  881469 fix.go:206] guest clock: 1699978463.832211650
	I1114 16:14:23.849606  881469 fix.go:219] Guest: 2023-11-14 16:14:23.83221165 +0000 UTC Remote: 2023-11-14 16:14:23.720887486 +0000 UTC m=+25.991128135 (delta=111.324164ms)
	I1114 16:14:23.849673  881469 fix.go:190] guest clock delta is within tolerance: 111.324164ms
	I1114 16:14:23.849681  881469 start.go:83] releasing machines lock for "newest-cni-161256", held for 25.986446906s
	I1114 16:14:23.849727  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.850024  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.853811  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.854242  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.854267  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.854457  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.854929  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.855189  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.855341  881469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 16:14:23.855383  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.855472  881469 ssh_runner.go:195] Run: cat /version.json
	I1114 16:14:23.855501  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.858531  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.858707  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.858984  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.859019  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.859041  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.859056  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.859226  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.859241  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.859435  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.859451  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.859662  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.859667  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.859823  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.859823  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.947110  881469 ssh_runner.go:195] Run: systemctl --version
	I1114 16:14:23.975201  881469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 16:14:24.146755  881469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 16:14:24.153898  881469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 16:14:24.153973  881469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 16:14:24.170773  881469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 16:14:24.170798  881469 start.go:472] detecting cgroup driver to use...
	I1114 16:14:24.170898  881469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 16:14:24.184315  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 16:14:24.195742  881469 docker.go:203] disabling cri-docker service (if available) ...
	I1114 16:14:24.195812  881469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 16:14:24.208418  881469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 16:14:24.220829  881469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 16:14:24.326701  881469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 16:14:24.448062  881469 docker.go:219] disabling docker service ...
	I1114 16:14:24.448137  881469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 16:14:24.461347  881469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 16:14:24.474044  881469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 16:14:24.588367  881469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 16:14:24.706443  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 16:14:24.718562  881469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 16:14:24.736225  881469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 16:14:24.736304  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.745622  881469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 16:14:24.745695  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.754757  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.763742  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.773060  881469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 16:14:24.782622  881469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 16:14:24.790914  881469 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 16:14:24.790977  881469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 16:14:24.804357  881469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 16:14:24.815049  881469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 16:14:24.928182  881469 ssh_runner.go:195] Run: sudo systemctl restart crio
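Taken together, the printf and sed commands above (plus the CRIO_MINIKUBE_OPTIONS drop-in written earlier) leave the runtime configured roughly as follows before crio is restarted. This is reconstructed from the commands in the log rather than captured from the VM, so treat it as an illustration:

/etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

/etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

/etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
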
	I1114 16:14:25.100061  881469 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 16:14:25.100131  881469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 16:14:25.105250  881469 start.go:540] Will wait 60s for crictl version
	I1114 16:14:25.105312  881469 ssh_runner.go:195] Run: which crictl
	I1114 16:14:25.109193  881469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 16:14:25.154864  881469 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 16:14:25.154991  881469 ssh_runner.go:195] Run: crio --version
	I1114 16:14:25.203888  881469 ssh_runner.go:195] Run: crio --version
	I1114 16:14:25.253040  881469 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 16:14:25.254574  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:25.257607  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:25.258099  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:25.258150  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:25.258401  881469 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 16:14:25.264052  881469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
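The hosts update above stays idempotent by filtering out any old mapping before appending the new one (grep -v, echo, cp). A small local Go sketch of the same idea, assuming a scratch file rather than the real /etc/hosts and not part of minikube's own code:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps host to ip,
    // mirroring the grep -v / echo / cp pattern used over SSH in the log above.
    func ensureHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if line == "" {
    			continue // skip blank lines for brevity
    		}
    		// Drop any existing mapping for this hostname (same as grep -v $'\thost$').
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Exercised against a scratch file here; the log applies it to /etc/hosts as root.
    	if err := ensureHostsEntry("hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }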
	I1114 16:14:25.277627  881469 localpath.go:92] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.crt -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.crt
	I1114 16:14:25.277799  881469 localpath.go:117] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.key -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.key
	I1114 16:14:25.279677  881469 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1114 16:14:25.281088  881469 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 16:14:25.281156  881469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 16:14:25.316141  881469 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 16:14:25.316211  881469 ssh_runner.go:195] Run: which lz4
	I1114 16:14:25.320451  881469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 16:14:25.324701  881469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 16:14:25.324727  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 16:14:27.091079  881469 crio.go:444] Took 1.770662 seconds to copy over tarball
	I1114 16:14:27.091142  881469 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 16:14:30.274095  881469 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.182924074s)
	I1114 16:14:30.274128  881469 crio.go:451] Took 3.183016 seconds to extract the tarball
	I1114 16:14:30.274162  881469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 16:14:30.315918  881469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 16:14:30.396235  881469 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 16:14:30.396268  881469 cache_images.go:84] Images are preloaded, skipping loading
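The preload decision above rests on parsing `sudo crictl images --output json` and checking whether a marker image (here registry.k8s.io/kube-apiserver:v1.28.3) is present; if not, the tarball is copied and extracted, and the check is repeated. A hedged Go sketch of that check, assuming crictl's usual JSON shape (top-level "images" with "repoTags"), not minikube's actual implementation:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criImages mirrors the subset of `crictl images --output json` used here.
    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the listing contains the wanted tag, which is the
    // check behind "couldn't find preloaded image ... assuming images are not preloaded".
    func hasImage(listing []byte, want string) (bool, error) {
    	var out criImages
    	if err := json.Unmarshal(listing, &out); err != nil {
    		return false, err
    	}
    	for _, img := range out.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	// On a node with crictl installed this queries the live runtime.
    	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl not available:", err)
    		return
    	}
    	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.3")
    	fmt.Println(ok, err)
    }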
	I1114 16:14:30.396355  881469 ssh_runner.go:195] Run: crio config
	I1114 16:14:30.463805  881469 cni.go:84] Creating CNI manager for ""
	I1114 16:14:30.463836  881469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 16:14:30.463864  881469 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1114 16:14:30.463892  881469 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.15 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-161256 NodeName:newest-cni-161256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.72.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 16:14:30.464097  881469 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-161256"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 16:14:30.464228  881469 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-161256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
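The kubeadm config and kubelet drop-in above are rendered from the options struct printed earlier. A minimal sketch of that kind of rendering with Go's text/template, showing only a ClusterConfiguration fragment with the values from this run (the real generation in minikube is larger, version-dependent, and also covers InitConfiguration, KubeletConfiguration and KubeProxyConfiguration):

    package main

    import (
    	"os"
    	"text/template"
    )

    // clusterParams is a hypothetical, trimmed-down parameter set for the fragment below.
    type clusterParams struct {
    	K8sVersion    string
    	PodSubnet     string
    	ServiceSubnet string
    }

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    kubernetesVersion: {{.K8sVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := clusterParams{
    		K8sVersion:    "v1.28.3",
    		PodSubnet:     "10.42.0.0/16",
    		ServiceSubnet: "10.96.0.0/12",
    	}
    	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
    	_ = tmpl.Execute(os.Stdout, p)
    }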
	I1114 16:14:30.464308  881469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 16:14:30.476779  881469 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 16:14:30.476930  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 16:14:30.489819  881469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (413 bytes)
	I1114 16:14:30.508452  881469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 16:14:30.525450  881469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1114 16:14:30.542846  881469 ssh_runner.go:195] Run: grep 192.168.72.15	control-plane.minikube.internal$ /etc/hosts
	I1114 16:14:30.547157  881469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 16:14:30.559590  881469 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256 for IP: 192.168.72.15
	I1114 16:14:30.559622  881469 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:30.559781  881469 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 16:14:30.559823  881469 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 16:14:30.559968  881469 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.key
	I1114 16:14:30.559992  881469 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f
	I1114 16:14:30.560002  881469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f with IP's: [192.168.72.15 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 16:14:31.152006  881469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f ...
	I1114 16:14:31.152043  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f: {Name:mk7a0f8fd163798dba5b4bbaf0c798188857d61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.152213  881469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f ...
	I1114 16:14:31.152229  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f: {Name:mk5bbdb8ba1400011f29179b852e9a76cd67f55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.152301  881469 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt
	I1114 16:14:31.152370  881469 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key
	I1114 16:14:31.152420  881469 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key
	I1114 16:14:31.152440  881469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt with IP's: []
	I1114 16:14:31.399241  881469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt ...
	I1114 16:14:31.399276  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt: {Name:mkb08540938312209ab6b9e645f6fa4dce126237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.399445  881469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key ...
	I1114 16:14:31.399463  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key: {Name:mk5fa99c7428f44aea4a34e082153d46a09bd518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
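The apiserver certificate generated above carries a fixed set of IP SANs (192.168.72.15, 10.96.0.1, 127.0.0.1, 10.0.0.1). As an illustration of issuing a certificate with IP SANs in Go, here is a self-signed sketch; minikube itself signs against its cluster CA and writes the files under the profile directory, so this is not its code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key for the (self-signed, illustrative) apiserver-style certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// Same IP SANs as the apiserver cert generated in the log above.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.72.15"),
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    		},
    	}

    	// Self-signed here for brevity; a real setup signs with the cluster CA key.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }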
	I1114 16:14:31.399668  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 16:14:31.399726  881469 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 16:14:31.399744  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 16:14:31.399786  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 16:14:31.399823  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 16:14:31.399859  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 16:14:31.399915  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 16:14:31.400522  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 16:14:31.424820  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 16:14:31.447674  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 16:14:31.474195  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 16:14:31.503068  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 16:14:31.530742  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 16:14:31.555074  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 16:14:31.581597  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 16:14:31.608688  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 16:14:31.632328  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 16:14:31.656558  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 16:14:31.680968  881469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 16:14:31.699612  881469 ssh_runner.go:195] Run: openssl version
	I1114 16:14:31.705184  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 16:14:31.716933  881469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 16:14:31.721554  881469 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 16:14:31.721623  881469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 16:14:31.727428  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 16:14:31.739191  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 16:14:31.750863  881469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 16:14:31.755885  881469 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 16:14:31.755955  881469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 16:14:31.761852  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 16:14:31.772272  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 16:14:31.782987  881469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 16:14:31.787920  881469 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 16:14:31.787981  881469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 16:14:31.793991  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
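Each CA file above is exposed to OpenSSL-based clients via a <subject-hash>.0 symlink in /etc/ssl/certs, computed with `openssl x509 -hash -noout`. A small Go sketch of that hash-and-symlink step, assuming openssl is on PATH and shelling out rather than re-implementing OpenSSL's subject-name hash:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash reproduces the pattern in the log: compute the OpenSSL
    // subject-name hash of a CA certificate and expose it as <hash>.0 in certDir.
    func linkCertByHash(certPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	// Equivalent to `ln -fs`: remove any stale link before creating the new one.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }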
	I1114 16:14:31.806631  881469 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 16:14:31.811199  881469 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 16:14:31.811263  881469 kubeadm.go:404] StartCluster: {Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 16:14:31.811346  881469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 16:14:31.811420  881469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 16:14:31.858922  881469 cri.go:89] found id: ""
	I1114 16:14:31.859017  881469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 16:14:31.871806  881469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 16:14:31.882802  881469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 16:14:31.897862  881469 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 16:14:31.897907  881469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 16:14:32.010801  881469 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 16:14:32.010875  881469 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 16:14:32.271904  881469 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 16:14:32.272062  881469 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 16:14:32.272207  881469 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 16:14:32.514635  881469 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:53:28 UTC, ends at Tue 2023-11-14 16:14:33 UTC. --
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.126743623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978473126723098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bbda9090-b62f-43f3-9e04-ebb90a420223 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.127588611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dca96704-b042-4671-8ded-f8c93b80697a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.127680461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dca96704-b042-4671-8ded-f8c93b80697a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.127899523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dca96704-b042-4671-8ded-f8c93b80697a name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.168155949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f1fe58fe-a148-44c9-af24-88212d1b3c75 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.168213816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f1fe58fe-a148-44c9-af24-88212d1b3c75 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.169859531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=db895974-8860-4433-ba77-0a2b102fb901 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.170306705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978473170285950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=db895974-8860-4433-ba77-0a2b102fb901 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.170879838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=970c91e1-5d8e-494f-ac94-445389c6e988 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.170963118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=970c91e1-5d8e-494f-ac94-445389c6e988 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.171182149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=970c91e1-5d8e-494f-ac94-445389c6e988 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.211888251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5e8f88c9-c5fd-4322-b1e9-8324ff5803a5 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.211979602Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5e8f88c9-c5fd-4322-b1e9-8324ff5803a5 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.213301795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9b996e29-b5c3-437f-b343-b9a3043652fd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.213768039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978473213752696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9b996e29-b5c3-437f-b343-b9a3043652fd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.214396837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=24e8df5f-764b-4895-94c0-407488d05338 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.214530919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=24e8df5f-764b-4895-94c0-407488d05338 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.214732068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=24e8df5f-764b-4895-94c0-407488d05338 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.256146793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51b9e198-82f1-4fb3-b17f-26c23955419a name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.256232893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51b9e198-82f1-4fb3-b17f-26c23955419a name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.257724987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b3d981af-bfd7-4d86-b10f-4786c9c8e8e6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.258355109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978473258338313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b3d981af-bfd7-4d86-b10f-4786c9c8e8e6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.259115823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f251061-7c82-4811-91cf-05e7568d38b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.259161242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f251061-7c82-4811-91cf-05e7568d38b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:33 embed-certs-279880 crio[707]: time="2023-11-14 16:14:33.259401091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992,PodSandboxId:a16a96152bc358a8c3fec8c6a96b5163e72e4b918e378bbf5334c6d87f6453ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977536643581968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3168b6ac-f288-4e1d-a4ce-78c4198debba,},Annotations:map[string]string{io.kubernetes.container.hash: 2276adff,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606,PodSandboxId:0cb501837f5b71df2a529b7e7f5653a541722785d0cad99aa8521ed5557f387d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977536201739520,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdppd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddcb6130-1e2c-49b0-99de-b6b7d576d82c,},Annotations:map[string]string{io.kubernetes.container.hash: 965ba9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177,PodSandboxId:41e9a1ff99376bd5e3726daf30c53e821458b7b42570ce639fdedb3141cfae75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977535628469561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-42nzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88175e14-09c2-4dc2-a56a-fa3bf71ae420,},Annotations:map[string]string{io.kubernetes.container.hash: fc333b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558,PodSandboxId:59f0ab2a002c1248a494bcd77c1280dc59b87b7cc8e4e8032acb7985faca402d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977512104150320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092ea65709ebacc65acf1f06e0b9e365,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66ab31e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2,PodSandboxId:64de30fc95549f64f97ef869e43fd4a8458b2f0dc661d89b6d7149e09066897f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977512035279064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63762a34480f9
0aab908464a95fb4a2d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df,PodSandboxId:1c5eea2f27aa40f6ba9e2f627a3bae9cc96a6f789fd720bf07af02e508fe7323,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977511813975185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e4f62415f16dde270e802
807238601,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b,PodSandboxId:4073e91be8f5a881049f4ed66d6a4e52ee84b1a1b84b6599aaf2245e6d7eb6d0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977511687168501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-279880,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26039813275e3110d741b46c8b90541,
},Annotations:map[string]string{io.kubernetes.container.hash: 996cc199,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f251061-7c82-4811-91cf-05e7568d38b8 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fe9f08afebe6e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   a16a96152bc35       storage-provisioner
	9cae5d2c2a9eb       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   15 minutes ago      Running             kube-proxy                0                   0cb501837f5b7       kube-proxy-qdppd
	9257697dbd32b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   41e9a1ff99376       coredns-5dd5756b68-42nzn
	b28ed4dcfc30b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   59f0ab2a002c1       etcd-embed-certs-279880
	605fd09539313       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   16 minutes ago      Running             kube-controller-manager   2                   64de30fc95549       kube-controller-manager-embed-certs-279880
	7a97f16105c7a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   16 minutes ago      Running             kube-scheduler            2                   1c5eea2f27aa4       kube-scheduler-embed-certs-279880
	12a4ab719e119       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   16 minutes ago      Running             kube-apiserver            2                   4073e91be8f5a       kube-apiserver-embed-certs-279880
	
	* 
	* ==> coredns [9257697dbd32b9f5c94ecc91c54f6e2a54702d2b050b24df619b2adc5e3ae177] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33459 - 63391 "HINFO IN 2980470950394339585.3559220984865409200. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011629594s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-279880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-279880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=embed-certs-279880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:58:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-279880
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 16:14:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:14:21 +0000   Tue, 14 Nov 2023 15:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:14:21 +0000   Tue, 14 Nov 2023 15:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:14:21 +0000   Tue, 14 Nov 2023 15:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:14:21 +0000   Tue, 14 Nov 2023 15:58:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    embed-certs-279880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2367ca900cfb4b1c89db78f52091f224
	  System UUID:                2367ca90-0cfb-4b1c-89db-78f52091f224
	  Boot ID:                    6a108333-9860-4bde-910b-df6c310bed4c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-42nzn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-279880                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-279880             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-279880    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qdppd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-279880             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-g5wh5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-279880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-279880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-279880 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-279880 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-279880 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-279880 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-279880 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-279880 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-279880 event: Registered Node embed-certs-279880 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068483] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.312848] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.213967] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.137969] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.444257] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.799256] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.113679] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.150432] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.118438] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.227506] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +17.218420] systemd-fstab-generator[906]: Ignoring "noauto" for root device
	[Nov14 15:54] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.048681] hrtimer: interrupt took 7368940 ns
	[Nov14 15:58] systemd-fstab-generator[3477]: Ignoring "noauto" for root device
	[  +9.837185] systemd-fstab-generator[3805]: Ignoring "noauto" for root device
	[ +12.876350] kauditd_printk_skb: 2 callbacks suppressed
	[Nov14 15:59] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [b28ed4dcfc30be14f62ee032493f7757abe6210167922d796fddd556e12b0558] <==
	* {"level":"info","ts":"2023-11-14T15:58:34.24575Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2023-11-14T15:58:34.917372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T15:58:34.917525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T15:58:34.917544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgPreVoteResp from c194f0f1585e7a7d at term 1"}
	{"level":"info","ts":"2023-11-14T15:58:34.917557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.917564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d received MsgVoteResp from c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.917573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d became leader at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.917581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c194f0f1585e7a7d elected leader c194f0f1585e7a7d at term 2"}
	{"level":"info","ts":"2023-11-14T15:58:34.91921Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c194f0f1585e7a7d","local-member-attributes":"{Name:embed-certs-279880 ClientURLs:[https://192.168.39.147:2379]}","request-path":"/0/members/c194f0f1585e7a7d/attributes","cluster-id":"582b8c8375119e1d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:58:34.919542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:58:34.920468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:58:34.920953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.147:2379"}
	{"level":"info","ts":"2023-11-14T15:58:34.921092Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:58:34.921405Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:58:34.922327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:58:34.925737Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:58:34.922943Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:58:34.925863Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:58:34.925915Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T16:08:34.968393Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":730}
	{"level":"info","ts":"2023-11-14T16:08:34.97592Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":730,"took":"6.682158ms","hash":4294546357}
	{"level":"info","ts":"2023-11-14T16:08:34.976098Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4294546357,"revision":730,"compact-revision":-1}
	{"level":"info","ts":"2023-11-14T16:13:34.977139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2023-11-14T16:13:34.979348Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":973,"took":"1.498597ms","hash":3105442736}
	{"level":"info","ts":"2023-11-14T16:13:34.97952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3105442736,"revision":973,"compact-revision":730}
	
	* 
	* ==> kernel <==
	*  16:14:34 up 21 min,  0 users,  load average: 0.09, 0.36, 0.37
	Linux embed-certs-279880 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [12a4ab719e1196005ec347ada5bc682a4c077bcc86479cae34ee93162895739b] <==
	* E1114 16:09:37.594493       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:09:37.594536       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:10:36.445393       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:11:36.443798       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:11:37.593246       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:11:37.593518       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:11:37.593592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:11:37.595577       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:11:37.595735       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:11:37.595796       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:12:36.444258       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:13:36.443279       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:13:36.597400       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:13:36.597618       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:13:36.598154       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:13:37.598775       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:13:37.599021       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:13:37.599131       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:13:37.598932       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:13:37.599286       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:13:37.601216       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [605fd09539313978e3b991c4e1254984fb76f4f33a0c5101edfb77f0dccd68a2] <==
	* I1114 16:08:52.386984       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:09:21.848811       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:09:22.395788       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:09:51.858360       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:09:52.405691       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:09:58.296246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="166.198µs"
	I1114 16:10:09.297318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="345µs"
	E1114 16:10:21.869915       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:10:22.418384       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:10:51.877501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:10:52.429059       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:11:21.883256       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:11:22.438103       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:11:51.889501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:11:52.449373       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:12:21.897393       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:12:22.459758       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:12:51.903824       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:12:52.469375       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:13:21.910220       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:13:22.478380       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:13:51.917199       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:13:52.486767       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:14:21.926498       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:14:22.499931       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [9cae5d2c2a9ebf19cb46e205e136ba531c7012883b826949a5bfedb33de30606] <==
	* I1114 15:58:56.930050       1 server_others.go:69] "Using iptables proxy"
	I1114 15:58:56.946869       1 node.go:141] Successfully retrieved node IP: 192.168.39.147
	I1114 15:58:57.002578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:58:57.002622       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:58:57.005255       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:58:57.005579       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:58:57.005995       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:58:57.006185       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:58:57.008139       1 config.go:188] "Starting service config controller"
	I1114 15:58:57.008756       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:58:57.008829       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:58:57.008838       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:58:57.010855       1 config.go:315] "Starting node config controller"
	I1114 15:58:57.010896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:58:57.109914       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:58:57.110024       1 shared_informer.go:318] Caches are synced for service config
	I1114 15:58:57.111633       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a97f16105c7a5d834003882f00f751e9cfd77f196e7a832c91132df2d56b0df] <==
	* E1114 15:58:36.698498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 15:58:36.698536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:36.698544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 15:58:36.698553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:58:36.698561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:58:36.698073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:36.698782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:36.698154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 15:58:37.563704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:37.563757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 15:58:37.602978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:58:37.603119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1114 15:58:37.642019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:58:37.642115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:58:37.769824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 15:58:37.769903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 15:58:37.775526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:58:37.775592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 15:58:37.787183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:58:37.787249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 15:58:37.790755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 15:58:37.790822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 15:58:37.995874       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 15:58:37.995958       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1114 15:58:41.155064       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:53:28 UTC, ends at Tue 2023-11-14 16:14:34 UTC. --
	Nov 14 16:11:40 embed-certs-279880 kubelet[3812]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:11:40 embed-certs-279880 kubelet[3812]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:11:40 embed-certs-279880 kubelet[3812]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:11:48 embed-certs-279880 kubelet[3812]: E1114 16:11:48.276692    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:12:03 embed-certs-279880 kubelet[3812]: E1114 16:12:03.275687    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:12:16 embed-certs-279880 kubelet[3812]: E1114 16:12:16.278779    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:12:27 embed-certs-279880 kubelet[3812]: E1114 16:12:27.275653    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:12:40 embed-certs-279880 kubelet[3812]: E1114 16:12:40.276193    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:12:40 embed-certs-279880 kubelet[3812]: E1114 16:12:40.306338    3812 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:12:40 embed-certs-279880 kubelet[3812]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:12:40 embed-certs-279880 kubelet[3812]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:12:40 embed-certs-279880 kubelet[3812]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:12:51 embed-certs-279880 kubelet[3812]: E1114 16:12:51.275839    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:13:02 embed-certs-279880 kubelet[3812]: E1114 16:13:02.275721    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:13:17 embed-certs-279880 kubelet[3812]: E1114 16:13:17.276278    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:13:28 embed-certs-279880 kubelet[3812]: E1114 16:13:28.275800    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:13:40 embed-certs-279880 kubelet[3812]: E1114 16:13:40.302796    3812 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:13:40 embed-certs-279880 kubelet[3812]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:13:40 embed-certs-279880 kubelet[3812]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:13:40 embed-certs-279880 kubelet[3812]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:13:40 embed-certs-279880 kubelet[3812]: E1114 16:13:40.388308    3812 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 14 16:13:43 embed-certs-279880 kubelet[3812]: E1114 16:13:43.275731    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:13:54 embed-certs-279880 kubelet[3812]: E1114 16:13:54.277360    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:14:09 embed-certs-279880 kubelet[3812]: E1114 16:14:09.277104    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	Nov 14 16:14:23 embed-certs-279880 kubelet[3812]: E1114 16:14:23.276551    3812 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-g5wh5" podUID="e51d7d56-4203-404c-ac65-4b1e65ac4ad3"
	
	* 
	* ==> storage-provisioner [fe9f08afebe6e35bd60f1e32a5e8cb8b8b0635bb3575ae8d7a1a7b7df44ca992] <==
	* I1114 15:58:56.822925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 15:58:56.850562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 15:58:56.850690       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 15:58:56.879526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 15:58:56.881128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-279880_a5f186f8-8d31-4b40-8055-1e958bef9301!
	I1114 15:58:56.882738       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b2292e6-be29-4fb5-a8ce-24e3188549d9", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-279880_a5f186f8-8d31-4b40-8055-1e958bef9301 became leader
	I1114 15:58:56.981926       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-279880_a5f186f8-8d31-4b40-8055-1e958bef9301!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-279880 -n embed-certs-279880
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-279880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-g5wh5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-279880 describe pod metrics-server-57f55c9bc5-g5wh5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-279880 describe pod metrics-server-57f55c9bc5-g5wh5: exit status 1 (68.750881ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-g5wh5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-279880 describe pod metrics-server-57f55c9bc5-g5wh5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (392.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (414.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1114 16:08:22.912576  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 16:08:48.691370  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 16:08:52.668924  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 16:08:53.607421  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:15:11.012055338 +0000 UTC m=+5778.542240293
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-529430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.159µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-529430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-529430 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-529430 logs -n 25: (1.311444795s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 16:13 UTC | 14 Nov 23 16:13 UTC |
	| start   | -p newest-cni-161256 --memory=2200 --alsologtostderr   | newest-cni-161256            | jenkins | v1.32.0 | 14 Nov 23 16:13 UTC | 14 Nov 23 16:14 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 16:14 UTC | 14 Nov 23 16:14 UTC |
	| delete  | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 16:14 UTC | 14 Nov 23 16:14 UTC |
	| addons  | enable metrics-server -p newest-cni-161256             | newest-cni-161256            | jenkins | v1.32.0 | 14 Nov 23 16:14 UTC | 14 Nov 23 16:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-161256                                   | newest-cni-161256            | jenkins | v1.32.0 | 14 Nov 23 16:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 16:13:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 16:13:57.784836  881469 out.go:296] Setting OutFile to fd 1 ...
	I1114 16:13:57.785128  881469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 16:13:57.785138  881469 out.go:309] Setting ErrFile to fd 2...
	I1114 16:13:57.785146  881469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 16:13:57.785348  881469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 16:13:57.785980  881469 out.go:303] Setting JSON to false
	I1114 16:13:57.787108  881469 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":46590,"bootTime":1699931848,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 16:13:57.787173  881469 start.go:138] virtualization: kvm guest
	I1114 16:13:57.789820  881469 out.go:177] * [newest-cni-161256] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 16:13:57.791257  881469 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 16:13:57.791324  881469 notify.go:220] Checking for updates...
	I1114 16:13:57.792683  881469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 16:13:57.794219  881469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:13:57.795667  881469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:57.797148  881469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 16:13:57.798544  881469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 16:13:57.800427  881469 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800574  881469 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800696  881469 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800869  881469 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 16:13:57.840976  881469 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 16:13:57.842309  881469 start.go:298] selected driver: kvm2
	I1114 16:13:57.842324  881469 start.go:902] validating driver "kvm2" against <nil>
	I1114 16:13:57.842335  881469 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 16:13:57.843244  881469 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 16:13:57.843340  881469 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 16:13:57.858215  881469 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 16:13:57.858276  881469 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1114 16:13:57.858298  881469 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1114 16:13:57.858505  881469 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1114 16:13:57.858616  881469 cni.go:84] Creating CNI manager for ""
	I1114 16:13:57.858636  881469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 16:13:57.858647  881469 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 16:13:57.858656  881469 start_flags.go:323] config:
	{Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 16:13:57.858813  881469 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 16:13:57.861088  881469 out.go:177] * Starting control plane node newest-cni-161256 in cluster newest-cni-161256
	I1114 16:13:57.862595  881469 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 16:13:57.862632  881469 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 16:13:57.862691  881469 cache.go:56] Caching tarball of preloaded images
	I1114 16:13:57.862796  881469 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 16:13:57.862812  881469 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 16:13:57.862916  881469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json ...
	I1114 16:13:57.862949  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json: {Name:mka288a2361f2be2d9a752ce4e344331e93a7d9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:13:57.863168  881469 start.go:365] acquiring machines lock for newest-cni-161256: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 16:13:57.863222  881469 start.go:369] acquired machines lock for "newest-cni-161256" in 33.515µs
	I1114 16:13:57.863248  881469 start.go:93] Provisioning new machine with config: &{Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:13:57.863331  881469 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 16:13:57.865053  881469 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1114 16:13:57.865182  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:13:57.865231  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:13:57.879338  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I1114 16:13:57.879746  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:13:57.880279  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:13:57.880306  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:13:57.880723  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:13:57.880962  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:13:57.881183  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:13:57.881364  881469 start.go:159] libmachine.API.Create for "newest-cni-161256" (driver="kvm2")
	I1114 16:13:57.881402  881469 client.go:168] LocalClient.Create starting
	I1114 16:13:57.881465  881469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 16:13:57.881513  881469 main.go:141] libmachine: Decoding PEM data...
	I1114 16:13:57.881534  881469 main.go:141] libmachine: Parsing certificate...
	I1114 16:13:57.881631  881469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 16:13:57.881666  881469 main.go:141] libmachine: Decoding PEM data...
	I1114 16:13:57.881685  881469 main.go:141] libmachine: Parsing certificate...
	I1114 16:13:57.881723  881469 main.go:141] libmachine: Running pre-create checks...
	I1114 16:13:57.881758  881469 main.go:141] libmachine: (newest-cni-161256) Calling .PreCreateCheck
	I1114 16:13:57.882257  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:13:57.882866  881469 main.go:141] libmachine: Creating machine...
	I1114 16:13:57.882890  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Create
	I1114 16:13:57.883081  881469 main.go:141] libmachine: (newest-cni-161256) Creating KVM machine...
	I1114 16:13:57.884479  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found existing default KVM network
	I1114 16:13:57.885821  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.885625  881491 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:f8:83} reservation:<nil>}
	I1114 16:13:57.886569  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.886459  881491 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:58:bc} reservation:<nil>}
	I1114 16:13:57.887505  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.887399  881491 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:64:42} reservation:<nil>}
	I1114 16:13:57.888668  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.888578  881491 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e7120}
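For orientation, the lines above show network.go skipping the /24 subnets already used by other libvirt networks and settling on 192.168.72.0/24. A minimal Go sketch of that probing pattern follows; the helper name freeSubnet and the candidate list are illustrative, not the kvm2 driver's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// freeSubnet walks candidate /24 ranges and returns the first one that does
// not overlap a subnet already claimed by an existing libvirt network.
func freeSubnet(candidates, taken []string) (*net.IPNet, error) {
	takenNets := make([]*net.IPNet, 0, len(taken))
	for _, t := range taken {
		_, n, err := net.ParseCIDR(t)
		if err != nil {
			return nil, err
		}
		takenNets = append(takenNets, n)
	}
	for _, c := range candidates {
		_, cand, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		collides := false
		for _, t := range takenNets {
			// Two /24s overlap if either contains the other's base address.
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				collides = true
				break
			}
		}
		if !collides {
			return cand, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	taken := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	n, err := freeSubnet(candidates, taken)
	fmt.Println(n, err) // 192.168.72.0/24 <nil>, matching the subnet chosen in the log
}
```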
	I1114 16:13:57.894409  881469 main.go:141] libmachine: (newest-cni-161256) DBG | trying to create private KVM network mk-newest-cni-161256 192.168.72.0/24...
	I1114 16:13:57.973147  881469 main.go:141] libmachine: (newest-cni-161256) DBG | private KVM network mk-newest-cni-161256 192.168.72.0/24 created
	I1114 16:13:57.973201  881469 main.go:141] libmachine: (newest-cni-161256) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 ...
	I1114 16:13:57.973221  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.973079  881491 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:57.973318  881469 main.go:141] libmachine: (newest-cni-161256) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 16:13:57.973397  881469 main.go:141] libmachine: (newest-cni-161256) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 16:13:58.236968  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.236841  881491 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa...
	I1114 16:13:58.389420  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.389261  881491 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/newest-cni-161256.rawdisk...
	I1114 16:13:58.389453  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Writing magic tar header
	I1114 16:13:58.389471  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Writing SSH key tar header
	I1114 16:13:58.389480  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.389421  881491 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 ...
	I1114 16:13:58.389546  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256
	I1114 16:13:58.389602  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 (perms=drwx------)
	I1114 16:13:58.389630  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 16:13:58.389644  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 16:13:58.389655  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:58.389680  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 16:13:58.389693  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 16:13:58.389704  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 16:13:58.389718  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 16:13:58.389785  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 16:13:58.389810  881469 main.go:141] libmachine: (newest-cni-161256) Creating domain...
	I1114 16:13:58.389826  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 16:13:58.389844  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins
	I1114 16:13:58.389857  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home
	I1114 16:13:58.389872  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Skipping /home - not owner
	I1114 16:13:58.391223  881469 main.go:141] libmachine: (newest-cni-161256) define libvirt domain using xml: 
	I1114 16:13:58.391256  881469 main.go:141] libmachine: (newest-cni-161256) <domain type='kvm'>
	I1114 16:13:58.391270  881469 main.go:141] libmachine: (newest-cni-161256)   <name>newest-cni-161256</name>
	I1114 16:13:58.391280  881469 main.go:141] libmachine: (newest-cni-161256)   <memory unit='MiB'>2200</memory>
	I1114 16:13:58.391330  881469 main.go:141] libmachine: (newest-cni-161256)   <vcpu>2</vcpu>
	I1114 16:13:58.391364  881469 main.go:141] libmachine: (newest-cni-161256)   <features>
	I1114 16:13:58.391375  881469 main.go:141] libmachine: (newest-cni-161256)     <acpi/>
	I1114 16:13:58.391383  881469 main.go:141] libmachine: (newest-cni-161256)     <apic/>
	I1114 16:13:58.391392  881469 main.go:141] libmachine: (newest-cni-161256)     <pae/>
	I1114 16:13:58.391406  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.391420  881469 main.go:141] libmachine: (newest-cni-161256)   </features>
	I1114 16:13:58.391434  881469 main.go:141] libmachine: (newest-cni-161256)   <cpu mode='host-passthrough'>
	I1114 16:13:58.391461  881469 main.go:141] libmachine: (newest-cni-161256)   
	I1114 16:13:58.391472  881469 main.go:141] libmachine: (newest-cni-161256)   </cpu>
	I1114 16:13:58.391487  881469 main.go:141] libmachine: (newest-cni-161256)   <os>
	I1114 16:13:58.391502  881469 main.go:141] libmachine: (newest-cni-161256)     <type>hvm</type>
	I1114 16:13:58.391517  881469 main.go:141] libmachine: (newest-cni-161256)     <boot dev='cdrom'/>
	I1114 16:13:58.391528  881469 main.go:141] libmachine: (newest-cni-161256)     <boot dev='hd'/>
	I1114 16:13:58.391538  881469 main.go:141] libmachine: (newest-cni-161256)     <bootmenu enable='no'/>
	I1114 16:13:58.391549  881469 main.go:141] libmachine: (newest-cni-161256)   </os>
	I1114 16:13:58.391561  881469 main.go:141] libmachine: (newest-cni-161256)   <devices>
	I1114 16:13:58.391572  881469 main.go:141] libmachine: (newest-cni-161256)     <disk type='file' device='cdrom'>
	I1114 16:13:58.391609  881469 main.go:141] libmachine: (newest-cni-161256)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/boot2docker.iso'/>
	I1114 16:13:58.391636  881469 main.go:141] libmachine: (newest-cni-161256)       <target dev='hdc' bus='scsi'/>
	I1114 16:13:58.391662  881469 main.go:141] libmachine: (newest-cni-161256)       <readonly/>
	I1114 16:13:58.391680  881469 main.go:141] libmachine: (newest-cni-161256)     </disk>
	I1114 16:13:58.391697  881469 main.go:141] libmachine: (newest-cni-161256)     <disk type='file' device='disk'>
	I1114 16:13:58.391712  881469 main.go:141] libmachine: (newest-cni-161256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 16:13:58.391744  881469 main.go:141] libmachine: (newest-cni-161256)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/newest-cni-161256.rawdisk'/>
	I1114 16:13:58.391763  881469 main.go:141] libmachine: (newest-cni-161256)       <target dev='hda' bus='virtio'/>
	I1114 16:13:58.391776  881469 main.go:141] libmachine: (newest-cni-161256)     </disk>
	I1114 16:13:58.391792  881469 main.go:141] libmachine: (newest-cni-161256)     <interface type='network'>
	I1114 16:13:58.391809  881469 main.go:141] libmachine: (newest-cni-161256)       <source network='mk-newest-cni-161256'/>
	I1114 16:13:58.391822  881469 main.go:141] libmachine: (newest-cni-161256)       <model type='virtio'/>
	I1114 16:13:58.391849  881469 main.go:141] libmachine: (newest-cni-161256)     </interface>
	I1114 16:13:58.391876  881469 main.go:141] libmachine: (newest-cni-161256)     <interface type='network'>
	I1114 16:13:58.391892  881469 main.go:141] libmachine: (newest-cni-161256)       <source network='default'/>
	I1114 16:13:58.391904  881469 main.go:141] libmachine: (newest-cni-161256)       <model type='virtio'/>
	I1114 16:13:58.391918  881469 main.go:141] libmachine: (newest-cni-161256)     </interface>
	I1114 16:13:58.391929  881469 main.go:141] libmachine: (newest-cni-161256)     <serial type='pty'>
	I1114 16:13:58.391939  881469 main.go:141] libmachine: (newest-cni-161256)       <target port='0'/>
	I1114 16:13:58.391951  881469 main.go:141] libmachine: (newest-cni-161256)     </serial>
	I1114 16:13:58.391977  881469 main.go:141] libmachine: (newest-cni-161256)     <console type='pty'>
	I1114 16:13:58.391998  881469 main.go:141] libmachine: (newest-cni-161256)       <target type='serial' port='0'/>
	I1114 16:13:58.392013  881469 main.go:141] libmachine: (newest-cni-161256)     </console>
	I1114 16:13:58.392024  881469 main.go:141] libmachine: (newest-cni-161256)     <rng model='virtio'>
	I1114 16:13:58.392038  881469 main.go:141] libmachine: (newest-cni-161256)       <backend model='random'>/dev/random</backend>
	I1114 16:13:58.392049  881469 main.go:141] libmachine: (newest-cni-161256)     </rng>
	I1114 16:13:58.392061  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.392074  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.392086  881469 main.go:141] libmachine: (newest-cni-161256)   </devices>
	I1114 16:13:58.392101  881469 main.go:141] libmachine: (newest-cni-161256) </domain>
	I1114 16:13:58.392123  881469 main.go:141] libmachine: (newest-cni-161256) 
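The block above is the libvirt domain XML the driver defines for the VM, printed line by line. As a rough sketch only (the field names and template below are illustrative, not minikube's real kvm2 template), such a definition can be rendered from a Go text/template with the values seen in the log (2200 MiB, 2 vCPUs, the rawdisk path, and the private network mk-newest-cni-161256):

```go
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string // placeholder; the log uses the .rawdisk path under .minikube/machines
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, machine{
		Name:      "newest-cni-161256",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/newest-cni-161256.rawdisk",
		Network:   "mk-newest-cni-161256",
	})
}
```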
	I1114 16:13:58.397370  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:8b:ec:96 in network default
	I1114 16:13:58.398066  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring networks are active...
	I1114 16:13:58.398113  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:13:58.398797  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring network default is active
	I1114 16:13:58.399287  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring network mk-newest-cni-161256 is active
	I1114 16:13:58.399958  881469 main.go:141] libmachine: (newest-cni-161256) Getting domain xml...
	I1114 16:13:58.400849  881469 main.go:141] libmachine: (newest-cni-161256) Creating domain...
	I1114 16:13:59.726283  881469 main.go:141] libmachine: (newest-cni-161256) Waiting to get IP...
	I1114 16:13:59.727449  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:13:59.727962  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:13:59.727986  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:59.727939  881491 retry.go:31] will retry after 279.361106ms: waiting for machine to come up
	I1114 16:14:00.009714  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.010197  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.010237  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.010159  881491 retry.go:31] will retry after 359.592157ms: waiting for machine to come up
	I1114 16:14:00.372007  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.372590  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.372624  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.372515  881491 retry.go:31] will retry after 324.730593ms: waiting for machine to come up
	I1114 16:14:00.698994  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.699575  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.699610  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.699489  881491 retry.go:31] will retry after 476.141432ms: waiting for machine to come up
	I1114 16:14:01.177324  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:01.177753  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:01.177783  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:01.177714  881491 retry.go:31] will retry after 693.627681ms: waiting for machine to come up
	I1114 16:14:01.872724  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:01.873311  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:01.873346  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:01.873237  881491 retry.go:31] will retry after 922.207125ms: waiting for machine to come up
	I1114 16:14:02.796995  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:02.797487  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:02.797515  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:02.797447  881491 retry.go:31] will retry after 828.947009ms: waiting for machine to come up
	I1114 16:14:03.627753  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:03.628173  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:03.628210  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:03.628118  881491 retry.go:31] will retry after 997.915404ms: waiting for machine to come up
	I1114 16:14:04.627128  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:04.627568  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:04.627602  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:04.627510  881491 retry.go:31] will retry after 1.497303924s: waiting for machine to come up
	I1114 16:14:06.126245  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:06.126708  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:06.126773  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:06.126683  881491 retry.go:31] will retry after 2.041273523s: waiting for machine to come up
	I1114 16:14:08.169598  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:08.170190  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:08.170229  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:08.170121  881491 retry.go:31] will retry after 1.842095296s: waiting for machine to come up
	I1114 16:14:10.015052  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:10.015611  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:10.015646  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:10.015549  881491 retry.go:31] will retry after 2.927670132s: waiting for machine to come up
	I1114 16:14:12.944720  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:12.945324  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:12.945360  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:12.945263  881491 retry.go:31] will retry after 3.702057643s: waiting for machine to come up
	I1114 16:14:16.650490  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:16.650958  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:16.650990  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:16.650908  881491 retry.go:31] will retry after 5.604460167s: waiting for machine to come up
	I1114 16:14:22.258010  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.258475  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has current primary IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.258533  881469 main.go:141] libmachine: (newest-cni-161256) Found IP for machine: 192.168.72.15
	I1114 16:14:22.258560  881469 main.go:141] libmachine: (newest-cni-161256) Reserving static IP address...
	I1114 16:14:22.258936  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find host DHCP lease matching {name: "newest-cni-161256", mac: "52:54:00:06:29:44", ip: "192.168.72.15"} in network mk-newest-cni-161256
	I1114 16:14:22.344546  881469 main.go:141] libmachine: (newest-cni-161256) Reserved static IP address: 192.168.72.15
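The "will retry after ..." lines above are the driver polling for the VM's DHCP lease with a growing, jittered delay until an IP appears. A minimal Go sketch of that retry pattern follows; the backoff constants and helper name waitFor are illustrative, not the exact retry.go implementation.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it succeeds or the deadline passes, sleeping a
// little longer (with jitter) after every failed attempt.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
	}
}

func main() {
	tries := 0
	_ = waitFor(func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet") // e.g. no DHCP lease for the VM so far
		}
		return nil
	}, time.Minute)
}
```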
	I1114 16:14:22.344599  881469 main.go:141] libmachine: (newest-cni-161256) Waiting for SSH to be available...
	I1114 16:14:22.344611  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Getting to WaitForSSH function...
	I1114 16:14:22.347942  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.348375  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.348409  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.348585  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using SSH client type: external
	I1114 16:14:22.348616  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa (-rw-------)
	I1114 16:14:22.348666  881469 main.go:141] libmachine: (newest-cni-161256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 16:14:22.348685  881469 main.go:141] libmachine: (newest-cni-161256) DBG | About to run SSH command:
	I1114 16:14:22.348794  881469 main.go:141] libmachine: (newest-cni-161256) DBG | exit 0
	I1114 16:14:22.444878  881469 main.go:141] libmachine: (newest-cni-161256) DBG | SSH cmd err, output: <nil>: 
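The SSH probe above shells out to the external ssh client with host-key checking disabled and runs "exit 0" as a liveness check. A minimal Go sketch of that probe follows; the address and key path are placeholders, and the helper is illustrative rather than libmachine's actual WaitForSSH code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil once sshd on the new VM accepts the machine's key.
func sshReady(addr, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0", // succeeds as soon as the connection and key are accepted
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (%s)", err, out)
	}
	return nil
}

func main() {
	fmt.Println(sshReady("192.168.72.15", "/path/to/id_rsa"))
}
```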
	I1114 16:14:22.445251  881469 main.go:141] libmachine: (newest-cni-161256) KVM machine creation complete!
	I1114 16:14:22.445546  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:14:22.446255  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:22.446483  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:22.446698  881469 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 16:14:22.446723  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetState
	I1114 16:14:22.448178  881469 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 16:14:22.448199  881469 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 16:14:22.448209  881469 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 16:14:22.448240  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.451143  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.451592  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.451626  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.451815  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.452017  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.452188  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.452378  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.452632  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.453178  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.453198  881469 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 16:14:22.584113  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 16:14:22.584150  881469 main.go:141] libmachine: Detecting the provisioner...
	I1114 16:14:22.584162  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.587100  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.587496  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.587533  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.587647  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.587854  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.588086  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.588282  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.588472  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.588880  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.588894  881469 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 16:14:22.713853  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 16:14:22.714001  881469 main.go:141] libmachine: found compatible host: buildroot
	I1114 16:14:22.714021  881469 main.go:141] libmachine: Provisioning with buildroot...
	I1114 16:14:22.714035  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:22.714353  881469 buildroot.go:166] provisioning hostname "newest-cni-161256"
	I1114 16:14:22.714397  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:22.714634  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.717497  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.717871  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.717902  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.718002  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.718218  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.718401  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.718569  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.718809  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.719156  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.719179  881469 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-161256 && echo "newest-cni-161256" | sudo tee /etc/hostname
	I1114 16:14:22.862571  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161256
	
	I1114 16:14:22.862597  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.865536  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.865784  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.865817  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.866066  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.866276  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.866445  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.866579  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.866744  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.867182  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.867203  881469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-161256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-161256/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-161256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 16:14:23.001359  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 16:14:23.001407  881469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 16:14:23.001464  881469 buildroot.go:174] setting up certificates
	I1114 16:14:23.001485  881469 provision.go:83] configureAuth start
	I1114 16:14:23.001511  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:23.001901  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.004872  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.005238  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.005269  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.005429  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.007776  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.008237  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.008260  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.008470  881469 provision.go:138] copyHostCerts
	I1114 16:14:23.008534  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 16:14:23.008559  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 16:14:23.008659  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 16:14:23.008811  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 16:14:23.008830  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 16:14:23.008881  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 16:14:23.008960  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 16:14:23.008970  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 16:14:23.009025  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 16:14:23.009094  881469 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.newest-cni-161256 san=[192.168.72.15 192.168.72.15 localhost 127.0.0.1 minikube newest-cni-161256]
	I1114 16:14:23.079504  881469 provision.go:172] copyRemoteCerts
	I1114 16:14:23.079572  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 16:14:23.079600  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.082584  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.082929  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.082976  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.083207  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.083372  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.083537  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.083692  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.179440  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 16:14:23.202630  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 16:14:23.226109  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 16:14:23.249807  881469 provision.go:86] duration metric: configureAuth took 248.303658ms
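
The configureAuth pass above regenerates the machine's server certificate with the SANs listed in the provision.go:112 line and copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A quick way to double-check those SANs from the Jenkins host is sketched below; the openssl invocation is standard, only the path is taken from the log above.

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
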
	I1114 16:14:23.249837  881469 buildroot.go:189] setting minikube options for container-runtime
	I1114 16:14:23.250074  881469 config.go:182] Loaded profile config "newest-cni-161256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:14:23.250179  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.253266  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.253742  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.253777  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.254015  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.254251  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.254401  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.254555  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.254745  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:23.255215  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:23.255246  881469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 16:14:23.578903  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
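
The "%!s(MISSING)" token in the command above is a Go format-verb artifact added when the log writer re-formats the command string; judging from the echoed file contents, the command actually executed on the guest is most likely the reconstruction below (an assumption, not part of the captured output):

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
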
	
	I1114 16:14:23.578934  881469 main.go:141] libmachine: Checking connection to Docker...
	I1114 16:14:23.578944  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetURL
	I1114 16:14:23.580328  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using libvirt version 6000000
	I1114 16:14:23.583089  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.583490  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.583521  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.583676  881469 main.go:141] libmachine: Docker is up and running!
	I1114 16:14:23.583692  881469 main.go:141] libmachine: Reticulating splines...
	I1114 16:14:23.583699  881469 client.go:171] LocalClient.Create took 25.702286469s
	I1114 16:14:23.583722  881469 start.go:167] duration metric: libmachine.API.Create for "newest-cni-161256" took 25.702360903s
	I1114 16:14:23.583734  881469 start.go:300] post-start starting for "newest-cni-161256" (driver="kvm2")
	I1114 16:14:23.583742  881469 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 16:14:23.583775  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.584090  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 16:14:23.584123  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.586647  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.586970  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.587000  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.587141  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.587285  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.587384  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.587503  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.678050  881469 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 16:14:23.682156  881469 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 16:14:23.682188  881469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 16:14:23.682263  881469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 16:14:23.682436  881469 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 16:14:23.682596  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 16:14:23.690851  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 16:14:23.716446  881469 start.go:303] post-start completed in 132.696208ms
	I1114 16:14:23.716505  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:14:23.717172  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.719919  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.720304  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.720331  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.720639  881469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json ...
	I1114 16:14:23.720874  881469 start.go:128] duration metric: createHost completed in 25.857531002s
	I1114 16:14:23.720903  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.723370  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.723733  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.723760  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.723892  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.724103  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.724271  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.724405  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.724612  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:23.724962  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:23.724976  881469 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 16:14:23.849570  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699978463.832211650
	
	I1114 16:14:23.849596  881469 fix.go:206] guest clock: 1699978463.832211650
	I1114 16:14:23.849606  881469 fix.go:219] Guest: 2023-11-14 16:14:23.83221165 +0000 UTC Remote: 2023-11-14 16:14:23.720887486 +0000 UTC m=+25.991128135 (delta=111.324164ms)
	I1114 16:14:23.849673  881469 fix.go:190] guest clock delta is within tolerance: 111.324164ms
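
The mangled "date +%!s(MISSING).%!N(MISSING)" above is the same log-formatting artifact; the probe run on the guest is almost certainly "date +%s.%N", whose output (1699978463.832211650) fix.go then compares against the host clock to confirm the skew stays within tolerance. A hand-run equivalent of that check, using the SSH key and user shown elsewhere in this log:

    # read the guest clock with nanosecond precision, then the host clock, and eyeball the delta
    ssh -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa \
        docker@192.168.72.15 'date +%s.%N'; date +%s.%N
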
	I1114 16:14:23.849681  881469 start.go:83] releasing machines lock for "newest-cni-161256", held for 25.986446906s
	I1114 16:14:23.849727  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.850024  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.853811  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.854242  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.854267  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.854457  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.854929  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.855189  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.855341  881469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 16:14:23.855383  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.855472  881469 ssh_runner.go:195] Run: cat /version.json
	I1114 16:14:23.855501  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.858531  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.858707  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.858984  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.859019  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.859041  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.859056  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.859226  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.859241  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.859435  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.859451  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.859662  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.859667  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.859823  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.859823  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.947110  881469 ssh_runner.go:195] Run: systemctl --version
	I1114 16:14:23.975201  881469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 16:14:24.146755  881469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 16:14:24.153898  881469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 16:14:24.153973  881469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 16:14:24.170773  881469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 16:14:24.170798  881469 start.go:472] detecting cgroup driver to use...
	I1114 16:14:24.170898  881469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 16:14:24.184315  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 16:14:24.195742  881469 docker.go:203] disabling cri-docker service (if available) ...
	I1114 16:14:24.195812  881469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 16:14:24.208418  881469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 16:14:24.220829  881469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 16:14:24.326701  881469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 16:14:24.448062  881469 docker.go:219] disabling docker service ...
	I1114 16:14:24.448137  881469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 16:14:24.461347  881469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 16:14:24.474044  881469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 16:14:24.588367  881469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 16:14:24.706443  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 16:14:24.718562  881469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 16:14:24.736225  881469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 16:14:24.736304  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.745622  881469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 16:14:24.745695  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.754757  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.763742  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.773060  881469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 16:14:24.782622  881469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 16:14:24.790914  881469 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 16:14:24.790977  881469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 16:14:24.804357  881469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 16:14:24.815049  881469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 16:14:24.928182  881469 ssh_runner.go:195] Run: sudo systemctl restart crio
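
The block above is minikube's CRI-O runtime-configuration pass: it points crictl at the CRI-O socket, pins the pause image, switches the cgroup driver to cgroupfs, re-seats conmon_cgroup, enables br_netfilter and IPv4 forwarding, then restarts the service. Condensed into a hand-runnable sketch (commands taken from the log; the crictl.yaml line reconstructs the printf whose %s verb was mangled by the logger):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter && sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio
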
	I1114 16:14:25.100061  881469 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 16:14:25.100131  881469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 16:14:25.105250  881469 start.go:540] Will wait 60s for crictl version
	I1114 16:14:25.105312  881469 ssh_runner.go:195] Run: which crictl
	I1114 16:14:25.109193  881469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 16:14:25.154864  881469 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 16:14:25.154991  881469 ssh_runner.go:195] Run: crio --version
	I1114 16:14:25.203888  881469 ssh_runner.go:195] Run: crio --version
	I1114 16:14:25.253040  881469 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 16:14:25.254574  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:25.257607  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:25.258099  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:25.258150  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:25.258401  881469 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 16:14:25.264052  881469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 16:14:25.277627  881469 localpath.go:92] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.crt -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.crt
	I1114 16:14:25.277799  881469 localpath.go:117] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.key -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.key
	I1114 16:14:25.279677  881469 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1114 16:14:25.281088  881469 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 16:14:25.281156  881469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 16:14:25.316141  881469 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 16:14:25.316211  881469 ssh_runner.go:195] Run: which lz4
	I1114 16:14:25.320451  881469 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 16:14:25.324701  881469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 16:14:25.324727  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 16:14:27.091079  881469 crio.go:444] Took 1.770662 seconds to copy over tarball
	I1114 16:14:27.091142  881469 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 16:14:30.274095  881469 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.182924074s)
	I1114 16:14:30.274128  881469 crio.go:451] Took 3.183016 seconds to extract the tarball
	I1114 16:14:30.274162  881469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 16:14:30.315918  881469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 16:14:30.396235  881469 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 16:14:30.396268  881469 cache_images.go:84] Images are preloaded, skipping loading
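
Because /preloaded.tar.lz4 did not exist on the guest, the ~457 MB preload tarball for v1.28.3/cri-o was copied over from the host cache and unpacked into /var, which is what pre-populates CRI-O's image store and lets cache_images.go skip pulling. The equivalent manual steps on the guest, taken from the commands above:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4        # ~3s in this run
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | grep kube-apiserver  # should now list registry.k8s.io/kube-apiserver:v1.28.3
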
	I1114 16:14:30.396355  881469 ssh_runner.go:195] Run: crio config
	I1114 16:14:30.463805  881469 cni.go:84] Creating CNI manager for ""
	I1114 16:14:30.463836  881469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 16:14:30.463864  881469 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1114 16:14:30.463892  881469 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.15 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-161256 NodeName:newest-cni-161256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 16:14:30.464097  881469 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-161256"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
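
In the KubeletConfiguration dump above, the evictionHard values are rendered with the same printf artifact; the file minikube actually generates almost certainly carries plain "0%" thresholds, i.e. disk-pressure eviction is disabled for this throw-away node. Once kubeadm has materialised the kubelet config (see the kubelet-start phase further down), the effective values can be checked on the guest with:

    sudo grep -A4 'evictionHard' /var/lib/kubelet/config.yaml
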
	
	I1114 16:14:30.464228  881469 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-161256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
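
The drop-in above uses the standard systemd override pattern: the bare "ExecStart=" line clears the ExecStart inherited from kubelet.service before the full command line with the CRI-O socket, feature gates and node IP is set. After the 413-byte 10-kubeadm.conf drop-in is written (just below), the merged unit can be inspected with:

    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -A2 '^ExecStart='
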
	I1114 16:14:30.464308  881469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 16:14:30.476779  881469 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 16:14:30.476930  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 16:14:30.489819  881469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (413 bytes)
	I1114 16:14:30.508452  881469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 16:14:30.525450  881469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1114 16:14:30.542846  881469 ssh_runner.go:195] Run: grep 192.168.72.15	control-plane.minikube.internal$ /etc/hosts
	I1114 16:14:30.547157  881469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 16:14:30.559590  881469 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256 for IP: 192.168.72.15
	I1114 16:14:30.559622  881469 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:30.559781  881469 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 16:14:30.559823  881469 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 16:14:30.559968  881469 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.key
	I1114 16:14:30.559992  881469 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f
	I1114 16:14:30.560002  881469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f with IP's: [192.168.72.15 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 16:14:31.152006  881469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f ...
	I1114 16:14:31.152043  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f: {Name:mk7a0f8fd163798dba5b4bbaf0c798188857d61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.152213  881469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f ...
	I1114 16:14:31.152229  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f: {Name:mk5bbdb8ba1400011f29179b852e9a76cd67f55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.152301  881469 certs.go:337] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt.9d44ac2f -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt
	I1114 16:14:31.152370  881469 certs.go:341] copying /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key.9d44ac2f -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key
	I1114 16:14:31.152420  881469 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key
	I1114 16:14:31.152440  881469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt with IP's: []
	I1114 16:14:31.399241  881469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt ...
	I1114 16:14:31.399276  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt: {Name:mkb08540938312209ab6b9e645f6fa4dce126237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.399445  881469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key ...
	I1114 16:14:31.399463  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key: {Name:mk5fa99c7428f44aea4a34e082153d46a09bd518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:31.399668  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 16:14:31.399726  881469 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 16:14:31.399744  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 16:14:31.399786  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 16:14:31.399823  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 16:14:31.399859  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 16:14:31.399915  881469 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 16:14:31.400522  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 16:14:31.424820  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 16:14:31.447674  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 16:14:31.474195  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 16:14:31.503068  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 16:14:31.530742  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 16:14:31.555074  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 16:14:31.581597  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 16:14:31.608688  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 16:14:31.632328  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 16:14:31.656558  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 16:14:31.680968  881469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 16:14:31.699612  881469 ssh_runner.go:195] Run: openssl version
	I1114 16:14:31.705184  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 16:14:31.716933  881469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 16:14:31.721554  881469 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 16:14:31.721623  881469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 16:14:31.727428  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 16:14:31.739191  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 16:14:31.750863  881469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 16:14:31.755885  881469 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 16:14:31.755955  881469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 16:14:31.761852  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 16:14:31.772272  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 16:14:31.782987  881469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 16:14:31.787920  881469 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 16:14:31.787981  881469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 16:14:31.793991  881469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
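
The three hash-and-link passes above implement the OpenSSL "c_rehash" convention: every CA dropped into /usr/share/ca-certificates is also exposed in /etc/ssl/certs under its subject-hash name (<hash>.0) so TLS clients can locate it. Done by hand for the minikube CA it looks like this (the hash value b5213941 matches the command in the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
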
	I1114 16:14:31.806631  881469 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 16:14:31.811199  881469 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 16:14:31.811263  881469 kubeadm.go:404] StartCluster: {Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 16:14:31.811346  881469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 16:14:31.811420  881469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 16:14:31.858922  881469 cri.go:89] found id: ""
	I1114 16:14:31.859017  881469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 16:14:31.871806  881469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 16:14:31.882802  881469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 16:14:31.897862  881469 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 16:14:31.897907  881469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 16:14:32.010801  881469 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 16:14:32.010875  881469 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 16:14:32.271904  881469 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 16:14:32.272062  881469 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 16:14:32.272207  881469 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 16:14:32.514635  881469 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 16:14:32.587880  881469 out.go:204]   - Generating certificates and keys ...
	I1114 16:14:32.588052  881469 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 16:14:32.588181  881469 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 16:14:32.792467  881469 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 16:14:32.979187  881469 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 16:14:33.252455  881469 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 16:14:33.352144  881469 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 16:14:33.513446  881469 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 16:14:33.513830  881469 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-161256] and IPs [192.168.72.15 127.0.0.1 ::1]
	I1114 16:14:33.608223  881469 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 16:14:33.608526  881469 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-161256] and IPs [192.168.72.15 127.0.0.1 ::1]
	I1114 16:14:33.725260  881469 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 16:14:33.985399  881469 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 16:14:34.138538  881469 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 16:14:34.138630  881469 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 16:14:34.281614  881469 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 16:14:34.676629  881469 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 16:14:34.875265  881469 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 16:14:35.062953  881469 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 16:14:35.063690  881469 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 16:14:35.067057  881469 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 16:14:35.068905  881469 out.go:204]   - Booting up control plane ...
	I1114 16:14:35.069051  881469 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 16:14:35.069146  881469 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 16:14:35.069232  881469 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 16:14:35.085612  881469 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 16:14:35.087860  881469 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 16:14:35.087927  881469 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 16:14:35.215028  881469 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 16:14:42.715531  881469 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504409 seconds
	I1114 16:14:42.715659  881469 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 16:14:42.729557  881469 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 16:14:43.257479  881469 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 16:14:43.257778  881469 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-161256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 16:14:43.771119  881469 kubeadm.go:322] [bootstrap-token] Using token: uhqyxr.3xxrsov8bc7ey8v3
	I1114 16:14:43.772560  881469 out.go:204]   - Configuring RBAC rules ...
	I1114 16:14:43.772698  881469 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 16:14:43.778234  881469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 16:14:43.786358  881469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 16:14:43.794018  881469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 16:14:43.803529  881469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 16:14:43.806899  881469 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 16:14:43.825876  881469 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 16:14:44.080664  881469 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 16:14:44.188412  881469 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 16:14:44.188456  881469 kubeadm.go:322] 
	I1114 16:14:44.188521  881469 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 16:14:44.188532  881469 kubeadm.go:322] 
	I1114 16:14:44.188621  881469 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 16:14:44.188636  881469 kubeadm.go:322] 
	I1114 16:14:44.188677  881469 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 16:14:44.188760  881469 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 16:14:44.188824  881469 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 16:14:44.188836  881469 kubeadm.go:322] 
	I1114 16:14:44.188946  881469 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 16:14:44.188972  881469 kubeadm.go:322] 
	I1114 16:14:44.189047  881469 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 16:14:44.189059  881469 kubeadm.go:322] 
	I1114 16:14:44.189129  881469 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 16:14:44.189221  881469 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 16:14:44.189284  881469 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 16:14:44.189289  881469 kubeadm.go:322] 
	I1114 16:14:44.189400  881469 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 16:14:44.189517  881469 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 16:14:44.189530  881469 kubeadm.go:322] 
	I1114 16:14:44.189604  881469 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uhqyxr.3xxrsov8bc7ey8v3 \
	I1114 16:14:44.189728  881469 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 16:14:44.189767  881469 kubeadm.go:322] 	--control-plane 
	I1114 16:14:44.189780  881469 kubeadm.go:322] 
	I1114 16:14:44.189892  881469 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 16:14:44.189903  881469 kubeadm.go:322] 
	I1114 16:14:44.190016  881469 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uhqyxr.3xxrsov8bc7ey8v3 \
	I1114 16:14:44.190171  881469 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 16:14:44.190484  881469 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
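
kubeadm finished, but it left the preflight warning above: the kubelet was started for this run, yet the unit is not enabled, so it would not come back on its own after a reboot of the VM. For a longer-lived cluster the fix is the one kubeadm itself suggests:

    sudo systemctl enable kubelet.service
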
	I1114 16:14:44.190520  881469 cni.go:84] Creating CNI manager for ""
	I1114 16:14:44.190531  881469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 16:14:44.192404  881469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 16:14:44.193891  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 16:14:44.219030  881469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 16:14:44.308418  881469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 16:14:44.308493  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:44.308511  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=newest-cni-161256 minikube.k8s.io/updated_at=2023_11_14T16_14_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:44.359263  881469 ops.go:34] apiserver oom_adj: -16
	I1114 16:14:44.577430  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:44.678967  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:45.274214  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:45.773837  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:46.274265  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:46.773977  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:47.273940  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:47.774384  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:48.274496  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:48.773964  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:49.274317  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:49.773586  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:50.273559  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:50.774037  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:51.273557  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:51.773561  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:52.274274  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:52.774370  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:53.273609  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:53.774008  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:54.273710  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:54.774682  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:55.274050  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:55.773837  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:56.273704  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:56.773984  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:57.274187  881469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:14:57.429687  881469 kubeadm.go:1081] duration metric: took 13.121250367s to wait for elevateKubeSystemPrivileges.
	I1114 16:14:57.429736  881469 kubeadm.go:406] StartCluster complete in 25.618479184s
	I1114 16:14:57.429763  881469 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:57.429869  881469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:14:57.431283  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:14:57.431544  881469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 16:14:57.431686  881469 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 16:14:57.431766  881469 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-161256"
	I1114 16:14:57.431785  881469 config.go:182] Loaded profile config "newest-cni-161256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:14:57.431791  881469 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-161256"
	I1114 16:14:57.431797  881469 addons.go:69] Setting default-storageclass=true in profile "newest-cni-161256"
	I1114 16:14:57.431827  881469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-161256"
	I1114 16:14:57.431872  881469 host.go:66] Checking if "newest-cni-161256" exists ...
	I1114 16:14:57.432305  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:14:57.432349  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:14:57.432358  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:14:57.432384  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:14:57.448657  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1114 16:14:57.449082  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I1114 16:14:57.449165  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:14:57.449690  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:14:57.449711  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:14:57.449743  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:14:57.450075  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:14:57.450270  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:14:57.450297  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:14:57.450656  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:14:57.450815  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:14:57.450853  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:14:57.450866  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetState
	I1114 16:14:57.454442  881469 addons.go:231] Setting addon default-storageclass=true in "newest-cni-161256"
	I1114 16:14:57.454493  881469 host.go:66] Checking if "newest-cni-161256" exists ...
	I1114 16:14:57.454936  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:14:57.454974  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:14:57.462204  881469 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-161256" context rescaled to 1 replicas
	I1114 16:14:57.462250  881469 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:14:57.463925  881469 out.go:177] * Verifying Kubernetes components...
	I1114 16:14:57.465717  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:14:57.467916  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I1114 16:14:57.468376  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:14:57.469008  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:14:57.469036  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:14:57.469461  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:14:57.469697  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetState
	I1114 16:14:57.471451  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I1114 16:14:57.471674  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:57.473541  881469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 16:14:57.472375  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:14:57.475533  881469 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:14:57.475552  881469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 16:14:57.475568  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:57.475892  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:14:57.475919  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:14:57.476560  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:14:57.477237  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:14:57.477274  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:14:57.480245  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:57.480805  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:57.480873  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:57.480986  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:57.481169  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:57.481345  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:57.481473  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:57.493985  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I1114 16:14:57.494663  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:14:57.495184  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:14:57.495204  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:14:57.495640  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:14:57.495820  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetState
	I1114 16:14:57.497529  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:57.497823  881469 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 16:14:57.497841  881469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 16:14:57.497861  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:57.501535  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:57.501899  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:57.501925  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:57.502212  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:57.502396  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:57.502528  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:57.502621  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:57.706018  881469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 16:14:57.707092  881469 api_server.go:52] waiting for apiserver process to appear ...
	I1114 16:14:57.707148  881469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 16:14:57.789803  881469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 16:14:57.870020  881469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:14:59.273484  881469 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.567421593s)
	I1114 16:14:59.273522  881469 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1114 16:14:59.273557  881469 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.566397475s)
	I1114 16:14:59.273574  881469 api_server.go:72] duration metric: took 1.811292608s to wait for apiserver process to appear ...
	I1114 16:14:59.273585  881469 api_server.go:88] waiting for apiserver healthz status ...
	I1114 16:14:59.273605  881469 api_server.go:253] Checking apiserver healthz at https://192.168.72.15:8443/healthz ...
	I1114 16:14:59.276901  881469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.487031704s)
	I1114 16:14:59.276953  881469 main.go:141] libmachine: Making call to close driver server
	I1114 16:14:59.276966  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Close
	I1114 16:14:59.277271  881469 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:14:59.277289  881469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:14:59.277299  881469 main.go:141] libmachine: Making call to close driver server
	I1114 16:14:59.277308  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Close
	I1114 16:14:59.277537  881469 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:14:59.277549  881469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:14:59.277585  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Closing plugin on server side
	I1114 16:14:59.286826  881469 api_server.go:279] https://192.168.72.15:8443/healthz returned 200:
	ok
	I1114 16:14:59.288892  881469 api_server.go:141] control plane version: v1.28.3
	I1114 16:14:59.288915  881469 api_server.go:131] duration metric: took 15.323796ms to wait for apiserver health ...
	I1114 16:14:59.288925  881469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 16:14:59.289441  881469 main.go:141] libmachine: Making call to close driver server
	I1114 16:14:59.289459  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Close
	I1114 16:14:59.289749  881469 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:14:59.289773  881469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:14:59.289783  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Closing plugin on server side
	I1114 16:14:59.304317  881469 system_pods.go:59] 7 kube-system pods found
	I1114 16:14:59.304350  881469 system_pods.go:61] "coredns-5dd5756b68-6n25v" [463666b1-329d-4db3-9e34-8fba03bf03dc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 16:14:59.304358  881469 system_pods.go:61] "coredns-5dd5756b68-qcnvr" [0965e6f1-7603-41be-aed0-ac0bda79ea61] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 16:14:59.304367  881469 system_pods.go:61] "etcd-newest-cni-161256" [56134871-a693-4ef6-8471-4bacaab749e7] Running
	I1114 16:14:59.304372  881469 system_pods.go:61] "kube-apiserver-newest-cni-161256" [7d778092-6228-4578-b67c-a89a235936bf] Running
	I1114 16:14:59.304376  881469 system_pods.go:61] "kube-controller-manager-newest-cni-161256" [0de87bed-3e42-457d-b970-750a675555d8] Running
	I1114 16:14:59.304382  881469 system_pods.go:61] "kube-proxy-h5l5t" [cdcabffb-866f-4481-96f0-a9d20197bacd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 16:14:59.304388  881469 system_pods.go:61] "kube-scheduler-newest-cni-161256" [354956cb-fff7-4c99-a357-434e2ce3b5d4] Running
	I1114 16:14:59.304395  881469 system_pods.go:74] duration metric: took 15.465739ms to wait for pod list to return data ...
	I1114 16:14:59.304405  881469 default_sa.go:34] waiting for default service account to be created ...
	I1114 16:14:59.308495  881469 default_sa.go:45] found service account: "default"
	I1114 16:14:59.308520  881469 default_sa.go:55] duration metric: took 4.107129ms for default service account to be created ...
	I1114 16:14:59.308532  881469 kubeadm.go:581] duration metric: took 1.846250113s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1114 16:14:59.308553  881469 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:14:59.312366  881469 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:14:59.312402  881469 node_conditions.go:123] node cpu capacity is 2
	I1114 16:14:59.312416  881469 node_conditions.go:105] duration metric: took 3.857517ms to run NodePressure ...
	I1114 16:14:59.312430  881469 start.go:228] waiting for startup goroutines ...
	I1114 16:14:59.608178  881469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.738112837s)
	I1114 16:14:59.608243  881469 main.go:141] libmachine: Making call to close driver server
	I1114 16:14:59.608257  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Close
	I1114 16:14:59.608655  881469 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:14:59.608696  881469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:14:59.608728  881469 main.go:141] libmachine: Making call to close driver server
	I1114 16:14:59.608774  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Close
	I1114 16:14:59.609117  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Closing plugin on server side
	I1114 16:14:59.609127  881469 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:14:59.609145  881469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:14:59.611159  881469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1114 16:14:59.612908  881469 addons.go:502] enable addons completed in 2.181224119s: enabled=[default-storageclass storage-provisioner]
	I1114 16:14:59.612954  881469 start.go:233] waiting for cluster config update ...
	I1114 16:14:59.612967  881469 start.go:242] writing updated cluster config ...
	I1114 16:14:59.613202  881469 ssh_runner.go:195] Run: rm -f paused
	I1114 16:14:59.675016  881469 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 16:14:59.676724  881469 out.go:177] * Done! kubectl is now configured to use "newest-cni-161256" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:54:11 UTC, ends at Tue 2023-11-14 16:15:11 UTC. --
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.774496853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978511774483078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=53e0e54d-666d-4046-93a2-886a2cd5039c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.775070143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49bde2a6-9eec-4d11-a995-0aa84b19a2c4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.775117514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49bde2a6-9eec-4d11-a995-0aa84b19a2c4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.775354810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49bde2a6-9eec-4d11-a995-0aa84b19a2c4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.813449567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6363a9c6-e64b-4df7-8dc6-95631aea9dc9 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.813528570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6363a9c6-e64b-4df7-8dc6-95631aea9dc9 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.815831253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eeb45fb4-a7e8-4abd-a003-7f6a713d76cf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.816304931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978511816290095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eeb45fb4-a7e8-4abd-a003-7f6a713d76cf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.817083520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a2f2d175-b44a-4848-97ae-dab3f713d757 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.817154730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a2f2d175-b44a-4848-97ae-dab3f713d757 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.817390276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a2f2d175-b44a-4848-97ae-dab3f713d757 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.858299318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=564fb281-61d7-48b3-bdbf-c580056c1689 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.858366555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=564fb281-61d7-48b3-bdbf-c580056c1689 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.859991777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=142c553f-a0be-4744-92b8-7c3a58c90a32 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.860442582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978511860428491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=142c553f-a0be-4744-92b8-7c3a58c90a32 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.860941011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6f00b787-7140-44f5-84df-c273c70dbae1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.860991219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6f00b787-7140-44f5-84df-c273c70dbae1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.861159437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6f00b787-7140-44f5-84df-c273c70dbae1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.901612580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=868bdbc9-45dd-4b4b-9810-1e3da1e4df4d name=/runtime.v1.RuntimeService/Version
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.901696571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=868bdbc9-45dd-4b4b-9810-1e3da1e4df4d name=/runtime.v1.RuntimeService/Version
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.903566233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a25fb793-0707-43c9-b28f-216691fc3651 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.903956426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978511903940990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a25fb793-0707-43c9-b28f-216691fc3651 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.904670132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=122cf0cc-703f-44f7-ab74-20caaf3657e0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.904739938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=122cf0cc-703f-44f7-ab74-20caaf3657e0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:15:11 default-k8s-diff-port-529430 crio[726]: time="2023-11-14 16:15:11.904922612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977318370780888,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e03edb96781074fb7437d6279e2de257cba318958364f6cff5688696ad114e6,PodSandboxId:f6c23dac7d3b539a10e7f075c4af5bb6632e916e274c38d274bac1737d740161,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1699977297013583666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e,},Annotations:map[string]string{io.kubernetes.container.hash: ad6d4c58,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a,PodSandboxId:b61185af9c4f3663a607c8a3bbd66bb055f012e4a6bd4d54f102bb9cf32fd14f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699977295447549877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-b8szg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac852af7-15e4-4112-9dff-c76da29439af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c7b1ae5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864,PodSandboxId:45952e1a5bc402cb6a7ef0d566033febe4f1a3bf1bbadeb93044439cef8ca6ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699977288012549548,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpchs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
53e58226-44f2-4482-a4f4-1628cbcad8f9,},Annotations:map[string]string{io.kubernetes.container.hash: 152b5fb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8,PodSandboxId:07d79896994bbf25bac080f68946c368ddd17431ccdfe0575f52548965f926d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699977287959500182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
934b414-9ec6-40dd-be45-6c6ab42dd75b,},Annotations:map[string]string{io.kubernetes.container.hash: c8fe6f6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07,PodSandboxId:1974315b49394011d7934c5eb5ca2c5dd6a777e1d044ee9ead80a935696c9b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699977281676693890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39d2cc7dda878aa2753319688d2bf78a,},An
notations:map[string]string{io.kubernetes.container.hash: ae9d5c97,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156,PodSandboxId:3bc7b2a145834917cf8c25d33a6b9a014b058866ea232f1f659c5ec90e38dd7f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699977281385901645,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b739f850bf9dad80e8b8d3256c0ecd9,},An
notations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5,PodSandboxId:7f3f711eb9f7b79b3e7ca1069c7b55a7b394dac80051fc747809641dc09591a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699977281420435617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad1d56d052707c4aeec01f950aca9707,},An
notations:map[string]string{io.kubernetes.container.hash: 8b932893,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3,PodSandboxId:8fc4ff502e05c37f0729069be2e23be14d70c5caedd91de4f04293c30056f729,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699977281430792299,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-529430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
96fe7c93be346ca7b1a5a5639d7a371,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=122cf0cc-703f-44f7-ab74-20caaf3657e0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	19e99b311805a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   07d79896994bb       storage-provisioner
	7e03edb967810       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   f6c23dac7d3b5       busybox
	335b691953328       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   b61185af9c4f3       coredns-5dd5756b68-b8szg
	a9e10dc7650db       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      20 minutes ago      Running             kube-proxy                1                   45952e1a5bc40       kube-proxy-zpchs
	251b882e2626a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   07d79896994bb       storage-provisioner
	ab4ac318c279a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   1974315b49394       etcd-default-k8s-diff-port-529430
	96d5f7a9c1434       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      20 minutes ago      Running             kube-controller-manager   1                   8fc4ff502e05c       kube-controller-manager-default-k8s-diff-port-529430
	c8ca3bf950b59       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      20 minutes ago      Running             kube-apiserver            1                   7f3f711eb9f7b       kube-apiserver-default-k8s-diff-port-529430
	bde54fa8d8b9d       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      20 minutes ago      Running             kube-scheduler            1                   3bc7b2a145834       kube-scheduler-default-k8s-diff-port-529430
	
	* 
	* ==> coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44655 - 3978 "HINFO IN 8021990947516006082.6706459765484640430. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013127245s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-529430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-529430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=default-k8s-diff-port-529430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_46_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:46:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-529430
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 16:15:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:10:35 +0000   Tue, 14 Nov 2023 15:46:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:10:35 +0000   Tue, 14 Nov 2023 15:46:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:10:35 +0000   Tue, 14 Nov 2023 15:46:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:10:35 +0000   Tue, 14 Nov 2023 15:54:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.196
	  Hostname:    default-k8s-diff-port-529430
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a20cb6a3ff3846808fbb02ac20cde918
	  System UUID:                a20cb6a3-ff38-4680-8fbb-02ac20cde918
	  Boot ID:                    4a895212-5e91-4626-b198-6d476df0a51a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-b8szg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-529430                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-529430             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-529430    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-zpchs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-529430             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-ss2ks                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-529430 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-529430 event: Registered Node default-k8s-diff-port-529430 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-529430 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-529430 event: Registered Node default-k8s-diff-port-529430 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.078958] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.792531] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.615335] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154088] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.506850] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.330399] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.123913] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.184703] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.150943] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.253952] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.937293] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +15.082527] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] <==
	* {"level":"info","ts":"2023-11-14T15:54:44.490821Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:54:51.154303Z","caller":"traceutil/trace.go:171","msg":"trace[16869641] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"336.756548ms","start":"2023-11-14T15:54:50.81753Z","end":"2023-11-14T15:54:51.154286Z","steps":["trace[16869641] 'process raft request'  (duration: 335.954847ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T15:54:51.154743Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-14T15:54:50.817516Z","time spent":"336.864339ms","remote":"127.0.0.1:36828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17978855fce967dd\" mod_revision:542 > success:<request_put:<key:\"/registry/events/default/busybox.17978855fce967dd\" value_size:700 lease:4841241843654433660 >> failure:<request_range:<key:\"/registry/events/default/busybox.17978855fce967dd\" > >"}
	{"level":"info","ts":"2023-11-14T15:54:58.973809Z","caller":"traceutil/trace.go:171","msg":"trace[62756389] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"172.508662ms","start":"2023-11-14T15:54:58.801286Z","end":"2023-11-14T15:54:58.973795Z","steps":["trace[62756389] 'process raft request'  (duration: 172.373551ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:54:58.975577Z","caller":"traceutil/trace.go:171","msg":"trace[553473316] linearizableReadLoop","detail":"{readStateIndex:610; appliedIndex:609; }","duration":"152.241199ms","start":"2023-11-14T15:54:58.823326Z","end":"2023-11-14T15:54:58.975567Z","steps":["trace[553473316] 'read index received'  (duration: 150.625564ms)","trace[553473316] 'applied index is now lower than readState.Index'  (duration: 1.615195ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T15:54:58.97579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.230746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-11-14T15:54:58.975971Z","caller":"traceutil/trace.go:171","msg":"trace[2116139502] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:570; }","duration":"102.421032ms","start":"2023-11-14T15:54:58.873535Z","end":"2023-11-14T15:54:58.975956Z","steps":["trace[2116139502] 'agreement among raft nodes before linearized reading'  (duration: 102.196152ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T15:54:58.976058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.73489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-11-14T15:54:58.976696Z","caller":"traceutil/trace.go:171","msg":"trace[1370932989] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:570; }","duration":"153.34086ms","start":"2023-11-14T15:54:58.823308Z","end":"2023-11-14T15:54:58.976649Z","steps":["trace[1370932989] 'agreement among raft nodes before linearized reading'  (duration: 152.658321ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T15:54:58.975898Z","caller":"traceutil/trace.go:171","msg":"trace[636378504] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"173.597694ms","start":"2023-11-14T15:54:58.802292Z","end":"2023-11-14T15:54:58.97589Z","steps":["trace[636378504] 'process raft request'  (duration: 173.13509ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T16:04:44.530149Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2023-11-14T16:04:44.533021Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"2.431662ms","hash":2229308458}
	{"level":"info","ts":"2023-11-14T16:04:44.533109Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2229308458,"revision":825,"compact-revision":-1}
	{"level":"info","ts":"2023-11-14T16:09:44.538105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2023-11-14T16:09:44.539927Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1067,"took":"1.277572ms","hash":2935693703}
	{"level":"info","ts":"2023-11-14T16:09:44.5401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2935693703,"revision":1067,"compact-revision":825}
	{"level":"warn","ts":"2023-11-14T16:14:30.556751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.080358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T16:14:30.556973Z","caller":"traceutil/trace.go:171","msg":"trace[1623534368] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1541; }","duration":"138.385125ms","start":"2023-11-14T16:14:30.418551Z","end":"2023-11-14T16:14:30.556936Z","steps":["trace[1623534368] 'range keys from in-memory index tree'  (duration: 137.97034ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T16:14:32.635714Z","caller":"traceutil/trace.go:171","msg":"trace[1179944266] linearizableReadLoop","detail":"{readStateIndex:1826; appliedIndex:1825; }","duration":"216.017765ms","start":"2023-11-14T16:14:32.419674Z","end":"2023-11-14T16:14:32.635692Z","steps":["trace[1179944266] 'read index received'  (duration: 215.834853ms)","trace[1179944266] 'applied index is now lower than readState.Index'  (duration: 182.395µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T16:14:32.635947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.276168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T16:14:32.636125Z","caller":"traceutil/trace.go:171","msg":"trace[1026079823] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1544; }","duration":"216.467332ms","start":"2023-11-14T16:14:32.41964Z","end":"2023-11-14T16:14:32.636107Z","steps":["trace[1026079823] 'agreement among raft nodes before linearized reading'  (duration: 216.235303ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T16:14:32.636341Z","caller":"traceutil/trace.go:171","msg":"trace[132860066] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"280.013645ms","start":"2023-11-14T16:14:32.356314Z","end":"2023-11-14T16:14:32.636328Z","steps":["trace[132860066] 'process raft request'  (duration: 279.238409ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T16:14:44.544828Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1309}
	{"level":"info","ts":"2023-11-14T16:14:44.546813Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1309,"took":"1.715099ms","hash":1960593523}
	{"level":"info","ts":"2023-11-14T16:14:44.547048Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1960593523,"revision":1309,"compact-revision":1067}
	
	* 
	* ==> kernel <==
	*  16:15:12 up 21 min,  0 users,  load average: 0.10, 0.22, 0.23
	Linux default-k8s-diff-port-529430 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] <==
	* W1114 16:10:47.416028       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:10:47.416118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:10:47.416704       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:11:46.202153       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:12:46.201969       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:12:47.415937       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:12:47.416296       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:12:47.416375       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:12:47.416825       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:12:47.416887       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:12:47.417545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:13:46.202062       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:14:46.201462       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:14:46.421446       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:14:46.421886       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:14:46.422454       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:14:47.422761       1 handler_proxy.go:93] no RequestInfo found in the context
	W1114 16:14:47.422814       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:14:47.422953       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:14:47.422962       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1114 16:14:47.423029       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:14:47.425114       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] <==
	* I1114 16:09:29.596089       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:09:59.156595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:09:59.603951       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:10:29.164681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:10:29.616130       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:10:59.170827       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:10:59.626688       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:11:12.175711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="399.22µs"
	I1114 16:11:23.173035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="239.115µs"
	E1114 16:11:29.176121       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:11:29.637005       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:11:59.181964       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:11:59.645869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:12:29.189084       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:12:29.655321       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:12:59.195141       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:12:59.664654       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:13:29.200671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:13:29.672815       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:13:59.209397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:13:59.685675       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:14:29.218300       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:14:29.701012       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:14:59.225041       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:14:59.711703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] <==
	* I1114 15:54:48.321473       1 server_others.go:69] "Using iptables proxy"
	I1114 15:54:48.340907       1 node.go:141] Successfully retrieved node IP: 192.168.61.196
	I1114 15:54:48.558817       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 15:54:48.559092       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 15:54:48.571155       1 server_others.go:152] "Using iptables Proxier"
	I1114 15:54:48.571510       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 15:54:48.579689       1 server.go:846] "Version info" version="v1.28.3"
	I1114 15:54:48.579819       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:54:48.584466       1 config.go:188] "Starting service config controller"
	I1114 15:54:48.584523       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 15:54:48.584558       1 config.go:97] "Starting endpoint slice config controller"
	I1114 15:54:48.584573       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 15:54:48.586455       1 config.go:315] "Starting node config controller"
	I1114 15:54:48.586681       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 15:54:48.685030       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 15:54:48.685108       1 shared_informer.go:318] Caches are synced for service config
	I1114 15:54:48.686893       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] <==
	* I1114 15:54:44.499682       1 serving.go:348] Generated self-signed cert in-memory
	W1114 15:54:46.361284       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 15:54:46.361408       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 15:54:46.361420       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 15:54:46.361426       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 15:54:46.402065       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 15:54:46.402305       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 15:54:46.406904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 15:54:46.406962       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 15:54:46.408425       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 15:54:46.408528       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 15:54:46.507354       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:54:11 UTC, ends at Tue 2023-11-14 16:15:12 UTC. --
	Nov 14 16:12:40 default-k8s-diff-port-529430 kubelet[933]: E1114 16:12:40.177639     933 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:12:40 default-k8s-diff-port-529430 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:12:40 default-k8s-diff-port-529430 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:12:40 default-k8s-diff-port-529430 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:12:48 default-k8s-diff-port-529430 kubelet[933]: E1114 16:12:48.158703     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:12:59 default-k8s-diff-port-529430 kubelet[933]: E1114 16:12:59.157351     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:13:12 default-k8s-diff-port-529430 kubelet[933]: E1114 16:13:12.158346     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:13:24 default-k8s-diff-port-529430 kubelet[933]: E1114 16:13:24.157388     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:13:36 default-k8s-diff-port-529430 kubelet[933]: E1114 16:13:36.158485     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:13:40 default-k8s-diff-port-529430 kubelet[933]: E1114 16:13:40.177106     933 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:13:40 default-k8s-diff-port-529430 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:13:40 default-k8s-diff-port-529430 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:13:40 default-k8s-diff-port-529430 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:13:47 default-k8s-diff-port-529430 kubelet[933]: E1114 16:13:47.157131     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:13:59 default-k8s-diff-port-529430 kubelet[933]: E1114 16:13:59.157395     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:14:14 default-k8s-diff-port-529430 kubelet[933]: E1114 16:14:14.158481     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:14:25 default-k8s-diff-port-529430 kubelet[933]: E1114 16:14:25.157061     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:14:38 default-k8s-diff-port-529430 kubelet[933]: E1114 16:14:38.163657     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:14:40 default-k8s-diff-port-529430 kubelet[933]: E1114 16:14:40.145946     933 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 14 16:14:40 default-k8s-diff-port-529430 kubelet[933]: E1114 16:14:40.182916     933 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:14:40 default-k8s-diff-port-529430 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:14:40 default-k8s-diff-port-529430 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:14:40 default-k8s-diff-port-529430 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:14:49 default-k8s-diff-port-529430 kubelet[933]: E1114 16:14:49.157489     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	Nov 14 16:15:03 default-k8s-diff-port-529430 kubelet[933]: E1114 16:15:03.156617     933 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ss2ks" podUID="73fc9292-8667-473e-b3ca-43c4ae9fbdb9"
	
	* 
	* ==> storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] <==
	* I1114 15:55:18.477990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 15:55:18.495268       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 15:55:18.495351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 15:55:35.901536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 15:55:35.902304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7d5a72e5-d297-4c5a-85e9-7507bad408b6", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-529430_2806e63b-34b1-4ed2-93a5-38b89e4eb2c2 became leader
	I1114 15:55:35.902431       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-529430_2806e63b-34b1-4ed2-93a5-38b89e4eb2c2!
	I1114 15:55:36.003531       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-529430_2806e63b-34b1-4ed2-93a5-38b89e4eb2c2!
	
	* 
	* ==> storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] <==
	* I1114 15:54:48.142003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1114 15:55:18.143621       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ss2ks
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 describe pod metrics-server-57f55c9bc5-ss2ks
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-529430 describe pod metrics-server-57f55c9bc5-ss2ks: exit status 1 (79.620933ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ss2ks" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-529430 describe pod metrics-server-57f55c9bc5-ss2ks: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (414.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (308.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1114 16:09:39.652889  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-490998 -n no-preload-490998
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:14:26.959460521 +0000 UTC m=+5734.489645544
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-490998 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-490998 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.949µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-490998 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-490998 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-490998 logs -n 25: (1.400634547s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 16:13 UTC | 14 Nov 23 16:13 UTC |
	| start   | -p newest-cni-161256 --memory=2200 --alsologtostderr   | newest-cni-161256            | jenkins | v1.32.0 | 14 Nov 23 16:13 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 16:13:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 16:13:57.784836  881469 out.go:296] Setting OutFile to fd 1 ...
	I1114 16:13:57.785128  881469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 16:13:57.785138  881469 out.go:309] Setting ErrFile to fd 2...
	I1114 16:13:57.785146  881469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 16:13:57.785348  881469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 16:13:57.785980  881469 out.go:303] Setting JSON to false
	I1114 16:13:57.787108  881469 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":46590,"bootTime":1699931848,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 16:13:57.787173  881469 start.go:138] virtualization: kvm guest
	I1114 16:13:57.789820  881469 out.go:177] * [newest-cni-161256] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 16:13:57.791257  881469 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 16:13:57.791324  881469 notify.go:220] Checking for updates...
	I1114 16:13:57.792683  881469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 16:13:57.794219  881469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:13:57.795667  881469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:57.797148  881469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 16:13:57.798544  881469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 16:13:57.800427  881469 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800574  881469 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800696  881469 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:13:57.800869  881469 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 16:13:57.840976  881469 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 16:13:57.842309  881469 start.go:298] selected driver: kvm2
	I1114 16:13:57.842324  881469 start.go:902] validating driver "kvm2" against <nil>
	I1114 16:13:57.842335  881469 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 16:13:57.843244  881469 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 16:13:57.843340  881469 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 16:13:57.858215  881469 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 16:13:57.858276  881469 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1114 16:13:57.858298  881469 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1114 16:13:57.858505  881469 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1114 16:13:57.858616  881469 cni.go:84] Creating CNI manager for ""
	I1114 16:13:57.858636  881469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 16:13:57.858647  881469 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 16:13:57.858656  881469 start_flags.go:323] config:
	{Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 16:13:57.858813  881469 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 16:13:57.861088  881469 out.go:177] * Starting control plane node newest-cni-161256 in cluster newest-cni-161256
	I1114 16:13:57.862595  881469 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 16:13:57.862632  881469 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 16:13:57.862691  881469 cache.go:56] Caching tarball of preloaded images
	I1114 16:13:57.862796  881469 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 16:13:57.862812  881469 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 16:13:57.862916  881469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json ...
	I1114 16:13:57.862949  881469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json: {Name:mka288a2361f2be2d9a752ce4e344331e93a7d9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:13:57.863168  881469 start.go:365] acquiring machines lock for newest-cni-161256: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 16:13:57.863222  881469 start.go:369] acquired machines lock for "newest-cni-161256" in 33.515µs
	I1114 16:13:57.863248  881469 start.go:93] Provisioning new machine with config: &{Name:newest-cni-161256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-161256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:13:57.863331  881469 start.go:125] createHost starting for "" (driver="kvm2")
	I1114 16:13:57.865053  881469 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1114 16:13:57.865182  881469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:13:57.865231  881469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:13:57.879338  881469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I1114 16:13:57.879746  881469 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:13:57.880279  881469 main.go:141] libmachine: Using API Version  1
	I1114 16:13:57.880306  881469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:13:57.880723  881469 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:13:57.880962  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:13:57.881183  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:13:57.881364  881469 start.go:159] libmachine.API.Create for "newest-cni-161256" (driver="kvm2")
	I1114 16:13:57.881402  881469 client.go:168] LocalClient.Create starting
	I1114 16:13:57.881465  881469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem
	I1114 16:13:57.881513  881469 main.go:141] libmachine: Decoding PEM data...
	I1114 16:13:57.881534  881469 main.go:141] libmachine: Parsing certificate...
	I1114 16:13:57.881631  881469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem
	I1114 16:13:57.881666  881469 main.go:141] libmachine: Decoding PEM data...
	I1114 16:13:57.881685  881469 main.go:141] libmachine: Parsing certificate...
	I1114 16:13:57.881723  881469 main.go:141] libmachine: Running pre-create checks...
	I1114 16:13:57.881758  881469 main.go:141] libmachine: (newest-cni-161256) Calling .PreCreateCheck
	I1114 16:13:57.882257  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:13:57.882866  881469 main.go:141] libmachine: Creating machine...
	I1114 16:13:57.882890  881469 main.go:141] libmachine: (newest-cni-161256) Calling .Create
	I1114 16:13:57.883081  881469 main.go:141] libmachine: (newest-cni-161256) Creating KVM machine...
	I1114 16:13:57.884479  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found existing default KVM network
	I1114 16:13:57.885821  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.885625  881491 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7a:f8:83} reservation:<nil>}
	I1114 16:13:57.886569  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.886459  881491 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:58:bc} reservation:<nil>}
	I1114 16:13:57.887505  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.887399  881491 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:64:42} reservation:<nil>}
	I1114 16:13:57.888668  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.888578  881491 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e7120}
	I1114 16:13:57.894409  881469 main.go:141] libmachine: (newest-cni-161256) DBG | trying to create private KVM network mk-newest-cni-161256 192.168.72.0/24...
	I1114 16:13:57.973147  881469 main.go:141] libmachine: (newest-cni-161256) DBG | private KVM network mk-newest-cni-161256 192.168.72.0/24 created
	I1114 16:13:57.973201  881469 main.go:141] libmachine: (newest-cni-161256) Setting up store path in /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 ...
	I1114 16:13:57.973221  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:57.973079  881491 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:57.973318  881469 main.go:141] libmachine: (newest-cni-161256) Building disk image from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 16:13:57.973397  881469 main.go:141] libmachine: (newest-cni-161256) Downloading /home/jenkins/minikube-integration/17598-824991/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso...
	I1114 16:13:58.236968  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.236841  881491 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa...
	I1114 16:13:58.389420  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.389261  881491 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/newest-cni-161256.rawdisk...
	I1114 16:13:58.389453  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Writing magic tar header
	I1114 16:13:58.389471  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Writing SSH key tar header
	I1114 16:13:58.389480  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:58.389421  881491 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 ...
	I1114 16:13:58.389546  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256
	I1114 16:13:58.389602  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256 (perms=drwx------)
	I1114 16:13:58.389630  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube/machines (perms=drwxr-xr-x)
	I1114 16:13:58.389644  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube/machines
	I1114 16:13:58.389655  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 16:13:58.389680  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991/.minikube (perms=drwxr-xr-x)
	I1114 16:13:58.389693  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration/17598-824991 (perms=drwxrwxr-x)
	I1114 16:13:58.389704  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1114 16:13:58.389718  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17598-824991
	I1114 16:13:58.389785  881469 main.go:141] libmachine: (newest-cni-161256) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1114 16:13:58.389810  881469 main.go:141] libmachine: (newest-cni-161256) Creating domain...
	I1114 16:13:58.389826  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1114 16:13:58.389844  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home/jenkins
	I1114 16:13:58.389857  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Checking permissions on dir: /home
	I1114 16:13:58.389872  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Skipping /home - not owner
	I1114 16:13:58.391223  881469 main.go:141] libmachine: (newest-cni-161256) define libvirt domain using xml: 
	I1114 16:13:58.391256  881469 main.go:141] libmachine: (newest-cni-161256) <domain type='kvm'>
	I1114 16:13:58.391270  881469 main.go:141] libmachine: (newest-cni-161256)   <name>newest-cni-161256</name>
	I1114 16:13:58.391280  881469 main.go:141] libmachine: (newest-cni-161256)   <memory unit='MiB'>2200</memory>
	I1114 16:13:58.391330  881469 main.go:141] libmachine: (newest-cni-161256)   <vcpu>2</vcpu>
	I1114 16:13:58.391364  881469 main.go:141] libmachine: (newest-cni-161256)   <features>
	I1114 16:13:58.391375  881469 main.go:141] libmachine: (newest-cni-161256)     <acpi/>
	I1114 16:13:58.391383  881469 main.go:141] libmachine: (newest-cni-161256)     <apic/>
	I1114 16:13:58.391392  881469 main.go:141] libmachine: (newest-cni-161256)     <pae/>
	I1114 16:13:58.391406  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.391420  881469 main.go:141] libmachine: (newest-cni-161256)   </features>
	I1114 16:13:58.391434  881469 main.go:141] libmachine: (newest-cni-161256)   <cpu mode='host-passthrough'>
	I1114 16:13:58.391461  881469 main.go:141] libmachine: (newest-cni-161256)   
	I1114 16:13:58.391472  881469 main.go:141] libmachine: (newest-cni-161256)   </cpu>
	I1114 16:13:58.391487  881469 main.go:141] libmachine: (newest-cni-161256)   <os>
	I1114 16:13:58.391502  881469 main.go:141] libmachine: (newest-cni-161256)     <type>hvm</type>
	I1114 16:13:58.391517  881469 main.go:141] libmachine: (newest-cni-161256)     <boot dev='cdrom'/>
	I1114 16:13:58.391528  881469 main.go:141] libmachine: (newest-cni-161256)     <boot dev='hd'/>
	I1114 16:13:58.391538  881469 main.go:141] libmachine: (newest-cni-161256)     <bootmenu enable='no'/>
	I1114 16:13:58.391549  881469 main.go:141] libmachine: (newest-cni-161256)   </os>
	I1114 16:13:58.391561  881469 main.go:141] libmachine: (newest-cni-161256)   <devices>
	I1114 16:13:58.391572  881469 main.go:141] libmachine: (newest-cni-161256)     <disk type='file' device='cdrom'>
	I1114 16:13:58.391609  881469 main.go:141] libmachine: (newest-cni-161256)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/boot2docker.iso'/>
	I1114 16:13:58.391636  881469 main.go:141] libmachine: (newest-cni-161256)       <target dev='hdc' bus='scsi'/>
	I1114 16:13:58.391662  881469 main.go:141] libmachine: (newest-cni-161256)       <readonly/>
	I1114 16:13:58.391680  881469 main.go:141] libmachine: (newest-cni-161256)     </disk>
	I1114 16:13:58.391697  881469 main.go:141] libmachine: (newest-cni-161256)     <disk type='file' device='disk'>
	I1114 16:13:58.391712  881469 main.go:141] libmachine: (newest-cni-161256)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1114 16:13:58.391744  881469 main.go:141] libmachine: (newest-cni-161256)       <source file='/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/newest-cni-161256.rawdisk'/>
	I1114 16:13:58.391763  881469 main.go:141] libmachine: (newest-cni-161256)       <target dev='hda' bus='virtio'/>
	I1114 16:13:58.391776  881469 main.go:141] libmachine: (newest-cni-161256)     </disk>
	I1114 16:13:58.391792  881469 main.go:141] libmachine: (newest-cni-161256)     <interface type='network'>
	I1114 16:13:58.391809  881469 main.go:141] libmachine: (newest-cni-161256)       <source network='mk-newest-cni-161256'/>
	I1114 16:13:58.391822  881469 main.go:141] libmachine: (newest-cni-161256)       <model type='virtio'/>
	I1114 16:13:58.391849  881469 main.go:141] libmachine: (newest-cni-161256)     </interface>
	I1114 16:13:58.391876  881469 main.go:141] libmachine: (newest-cni-161256)     <interface type='network'>
	I1114 16:13:58.391892  881469 main.go:141] libmachine: (newest-cni-161256)       <source network='default'/>
	I1114 16:13:58.391904  881469 main.go:141] libmachine: (newest-cni-161256)       <model type='virtio'/>
	I1114 16:13:58.391918  881469 main.go:141] libmachine: (newest-cni-161256)     </interface>
	I1114 16:13:58.391929  881469 main.go:141] libmachine: (newest-cni-161256)     <serial type='pty'>
	I1114 16:13:58.391939  881469 main.go:141] libmachine: (newest-cni-161256)       <target port='0'/>
	I1114 16:13:58.391951  881469 main.go:141] libmachine: (newest-cni-161256)     </serial>
	I1114 16:13:58.391977  881469 main.go:141] libmachine: (newest-cni-161256)     <console type='pty'>
	I1114 16:13:58.391998  881469 main.go:141] libmachine: (newest-cni-161256)       <target type='serial' port='0'/>
	I1114 16:13:58.392013  881469 main.go:141] libmachine: (newest-cni-161256)     </console>
	I1114 16:13:58.392024  881469 main.go:141] libmachine: (newest-cni-161256)     <rng model='virtio'>
	I1114 16:13:58.392038  881469 main.go:141] libmachine: (newest-cni-161256)       <backend model='random'>/dev/random</backend>
	I1114 16:13:58.392049  881469 main.go:141] libmachine: (newest-cni-161256)     </rng>
	I1114 16:13:58.392061  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.392074  881469 main.go:141] libmachine: (newest-cni-161256)     
	I1114 16:13:58.392086  881469 main.go:141] libmachine: (newest-cni-161256)   </devices>
	I1114 16:13:58.392101  881469 main.go:141] libmachine: (newest-cni-161256) </domain>
	I1114 16:13:58.392123  881469 main.go:141] libmachine: (newest-cni-161256) 
	I1114 16:13:58.397370  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:8b:ec:96 in network default
	I1114 16:13:58.398066  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring networks are active...
	I1114 16:13:58.398113  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:13:58.398797  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring network default is active
	I1114 16:13:58.399287  881469 main.go:141] libmachine: (newest-cni-161256) Ensuring network mk-newest-cni-161256 is active
	I1114 16:13:58.399958  881469 main.go:141] libmachine: (newest-cni-161256) Getting domain xml...
	I1114 16:13:58.400849  881469 main.go:141] libmachine: (newest-cni-161256) Creating domain...
	I1114 16:13:59.726283  881469 main.go:141] libmachine: (newest-cni-161256) Waiting to get IP...
	I1114 16:13:59.727449  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:13:59.727962  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:13:59.727986  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:13:59.727939  881491 retry.go:31] will retry after 279.361106ms: waiting for machine to come up
	I1114 16:14:00.009714  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.010197  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.010237  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.010159  881491 retry.go:31] will retry after 359.592157ms: waiting for machine to come up
	I1114 16:14:00.372007  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.372590  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.372624  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.372515  881491 retry.go:31] will retry after 324.730593ms: waiting for machine to come up
	I1114 16:14:00.698994  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:00.699575  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:00.699610  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:00.699489  881491 retry.go:31] will retry after 476.141432ms: waiting for machine to come up
	I1114 16:14:01.177324  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:01.177753  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:01.177783  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:01.177714  881491 retry.go:31] will retry after 693.627681ms: waiting for machine to come up
	I1114 16:14:01.872724  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:01.873311  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:01.873346  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:01.873237  881491 retry.go:31] will retry after 922.207125ms: waiting for machine to come up
	I1114 16:14:02.796995  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:02.797487  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:02.797515  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:02.797447  881491 retry.go:31] will retry after 828.947009ms: waiting for machine to come up
	I1114 16:14:03.627753  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:03.628173  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:03.628210  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:03.628118  881491 retry.go:31] will retry after 997.915404ms: waiting for machine to come up
	I1114 16:14:04.627128  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:04.627568  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:04.627602  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:04.627510  881491 retry.go:31] will retry after 1.497303924s: waiting for machine to come up
	I1114 16:14:06.126245  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:06.126708  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:06.126773  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:06.126683  881491 retry.go:31] will retry after 2.041273523s: waiting for machine to come up
	I1114 16:14:08.169598  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:08.170190  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:08.170229  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:08.170121  881491 retry.go:31] will retry after 1.842095296s: waiting for machine to come up
	I1114 16:14:10.015052  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:10.015611  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:10.015646  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:10.015549  881491 retry.go:31] will retry after 2.927670132s: waiting for machine to come up
	I1114 16:14:12.944720  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:12.945324  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:12.945360  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:12.945263  881491 retry.go:31] will retry after 3.702057643s: waiting for machine to come up
	I1114 16:14:16.650490  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:16.650958  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find current IP address of domain newest-cni-161256 in network mk-newest-cni-161256
	I1114 16:14:16.650990  881469 main.go:141] libmachine: (newest-cni-161256) DBG | I1114 16:14:16.650908  881491 retry.go:31] will retry after 5.604460167s: waiting for machine to come up
	I1114 16:14:22.258010  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.258475  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has current primary IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.258533  881469 main.go:141] libmachine: (newest-cni-161256) Found IP for machine: 192.168.72.15
	I1114 16:14:22.258560  881469 main.go:141] libmachine: (newest-cni-161256) Reserving static IP address...
	I1114 16:14:22.258936  881469 main.go:141] libmachine: (newest-cni-161256) DBG | unable to find host DHCP lease matching {name: "newest-cni-161256", mac: "52:54:00:06:29:44", ip: "192.168.72.15"} in network mk-newest-cni-161256
	I1114 16:14:22.344546  881469 main.go:141] libmachine: (newest-cni-161256) Reserved static IP address: 192.168.72.15
	I1114 16:14:22.344599  881469 main.go:141] libmachine: (newest-cni-161256) Waiting for SSH to be available...
	I1114 16:14:22.344611  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Getting to WaitForSSH function...
	I1114 16:14:22.347942  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.348375  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.348409  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.348585  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using SSH client type: external
	I1114 16:14:22.348616  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa (-rw-------)
	I1114 16:14:22.348666  881469 main.go:141] libmachine: (newest-cni-161256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 16:14:22.348685  881469 main.go:141] libmachine: (newest-cni-161256) DBG | About to run SSH command:
	I1114 16:14:22.348794  881469 main.go:141] libmachine: (newest-cni-161256) DBG | exit 0
	I1114 16:14:22.444878  881469 main.go:141] libmachine: (newest-cni-161256) DBG | SSH cmd err, output: <nil>: 
	I1114 16:14:22.445251  881469 main.go:141] libmachine: (newest-cni-161256) KVM machine creation complete!
	I1114 16:14:22.445546  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:14:22.446255  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:22.446483  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:22.446698  881469 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1114 16:14:22.446723  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetState
	I1114 16:14:22.448178  881469 main.go:141] libmachine: Detecting operating system of created instance...
	I1114 16:14:22.448199  881469 main.go:141] libmachine: Waiting for SSH to be available...
	I1114 16:14:22.448209  881469 main.go:141] libmachine: Getting to WaitForSSH function...
	I1114 16:14:22.448240  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.451143  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.451592  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.451626  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.451815  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.452017  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.452188  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.452378  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.452632  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.453178  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.453198  881469 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1114 16:14:22.584113  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 16:14:22.584150  881469 main.go:141] libmachine: Detecting the provisioner...
	I1114 16:14:22.584162  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.587100  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.587496  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.587533  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.587647  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.587854  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.588086  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.588282  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.588472  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.588880  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.588894  881469 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1114 16:14:22.713853  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g9cb9327-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1114 16:14:22.714001  881469 main.go:141] libmachine: found compatible host: buildroot
	I1114 16:14:22.714021  881469 main.go:141] libmachine: Provisioning with buildroot...
	I1114 16:14:22.714035  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:22.714353  881469 buildroot.go:166] provisioning hostname "newest-cni-161256"
	I1114 16:14:22.714397  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:22.714634  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.717497  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.717871  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.717902  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.718002  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.718218  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.718401  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.718569  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.718809  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.719156  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.719179  881469 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-161256 && echo "newest-cni-161256" | sudo tee /etc/hostname
	I1114 16:14:22.862571  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161256
	
	I1114 16:14:22.862597  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:22.865536  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.865784  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:22.865817  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:22.866066  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:22.866276  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.866445  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:22.866579  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:22.866744  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:22.867182  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:22.867203  881469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-161256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-161256/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-161256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 16:14:23.001359  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 16:14:23.001407  881469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 16:14:23.001464  881469 buildroot.go:174] setting up certificates
	I1114 16:14:23.001485  881469 provision.go:83] configureAuth start
	I1114 16:14:23.001511  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetMachineName
	I1114 16:14:23.001901  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.004872  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.005238  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.005269  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.005429  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.007776  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.008237  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.008260  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.008470  881469 provision.go:138] copyHostCerts
	I1114 16:14:23.008534  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 16:14:23.008559  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 16:14:23.008659  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 16:14:23.008811  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 16:14:23.008830  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 16:14:23.008881  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 16:14:23.008960  881469 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 16:14:23.008970  881469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 16:14:23.009025  881469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 16:14:23.009094  881469 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.newest-cni-161256 san=[192.168.72.15 192.168.72.15 localhost 127.0.0.1 minikube newest-cni-161256]
	I1114 16:14:23.079504  881469 provision.go:172] copyRemoteCerts
	I1114 16:14:23.079572  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 16:14:23.079600  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.082584  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.082929  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.082976  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.083207  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.083372  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.083537  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.083692  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.179440  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 16:14:23.202630  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 16:14:23.226109  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 16:14:23.249807  881469 provision.go:86] duration metric: configureAuth took 248.303658ms
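	(The configureAuth step above generates a server certificate whose SANs cover the guest IP, localhost and the machine hostnames, then scp's it to /etc/docker on the guest. The following is a minimal self-contained sketch of producing such a SAN-bearing certificate with Go's crypto/x509; it is self-signed for brevity and is not minikube's actual implementation, which signs with the CA key shown in the auth options.)

	// sancert.go - hedged sketch: self-signed server cert with the SAN list
	// seen in the provision.go line above (IP, loopback, hostname aliases).
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-161256"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the san=[...] list logged above.
			IPAddresses: []net.IP{net.ParseIP("192.168.72.15"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-161256"},
		}
		// Self-signed here; minikube instead signs with ca.pem/ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}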
	I1114 16:14:23.249837  881469 buildroot.go:189] setting minikube options for container-runtime
	I1114 16:14:23.250074  881469 config.go:182] Loaded profile config "newest-cni-161256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:14:23.250179  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.253266  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.253742  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.253777  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.254015  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.254251  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.254401  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.254555  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.254745  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:23.255215  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:23.255246  881469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 16:14:23.578903  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 16:14:23.578934  881469 main.go:141] libmachine: Checking connection to Docker...
	I1114 16:14:23.578944  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetURL
	I1114 16:14:23.580328  881469 main.go:141] libmachine: (newest-cni-161256) DBG | Using libvirt version 6000000
	I1114 16:14:23.583089  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.583490  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.583521  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.583676  881469 main.go:141] libmachine: Docker is up and running!
	I1114 16:14:23.583692  881469 main.go:141] libmachine: Reticulating splines...
	I1114 16:14:23.583699  881469 client.go:171] LocalClient.Create took 25.702286469s
	I1114 16:14:23.583722  881469 start.go:167] duration metric: libmachine.API.Create for "newest-cni-161256" took 25.702360903s
	I1114 16:14:23.583734  881469 start.go:300] post-start starting for "newest-cni-161256" (driver="kvm2")
	I1114 16:14:23.583742  881469 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 16:14:23.583775  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.584090  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 16:14:23.584123  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.586647  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.586970  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.587000  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.587141  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.587285  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.587384  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.587503  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.678050  881469 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 16:14:23.682156  881469 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 16:14:23.682188  881469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 16:14:23.682263  881469 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 16:14:23.682436  881469 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 16:14:23.682596  881469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 16:14:23.690851  881469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 16:14:23.716446  881469 start.go:303] post-start completed in 132.696208ms
	I1114 16:14:23.716505  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetConfigRaw
	I1114 16:14:23.717172  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.719919  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.720304  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.720331  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.720639  881469 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/config.json ...
	I1114 16:14:23.720874  881469 start.go:128] duration metric: createHost completed in 25.857531002s
	I1114 16:14:23.720903  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.723370  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.723733  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.723760  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.723892  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.724103  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.724271  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.724405  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.724612  881469 main.go:141] libmachine: Using SSH client type: native
	I1114 16:14:23.724962  881469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1114 16:14:23.724976  881469 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 16:14:23.849570  881469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699978463.832211650
	
	I1114 16:14:23.849596  881469 fix.go:206] guest clock: 1699978463.832211650
	I1114 16:14:23.849606  881469 fix.go:219] Guest: 2023-11-14 16:14:23.83221165 +0000 UTC Remote: 2023-11-14 16:14:23.720887486 +0000 UTC m=+25.991128135 (delta=111.324164ms)
	I1114 16:14:23.849673  881469 fix.go:190] guest clock delta is within tolerance: 111.324164ms
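	(The fix.go lines above read the guest clock with `date`, compare it to the host time recorded at the same moment, and accept the drift when the delta is inside a tolerance. A tiny sketch of that comparison follows, using the two timestamps from the log; the 1s tolerance is an assumed value for illustration, not minikube's actual setting.)

	// clockdelta.go - hedged sketch of the guest/host clock-drift check.
	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the absolute drift between the guest
	// and host clocks is below the given tolerance.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		// Guest value reported by `date` in the log: 1699978463.832211650.
		guest := time.Unix(0, 1699978463832211650)
		// Host ("Remote") timestamp from the same log line.
		host := time.Date(2023, 11, 14, 16, 14, 23, 720887486, time.UTC)
		fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second)) // ~111ms drift
	}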
	I1114 16:14:23.849681  881469 start.go:83] releasing machines lock for "newest-cni-161256", held for 25.986446906s
	I1114 16:14:23.849727  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.850024  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:23.853811  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.854242  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.854267  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.854457  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.854929  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.855189  881469 main.go:141] libmachine: (newest-cni-161256) Calling .DriverName
	I1114 16:14:23.855341  881469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 16:14:23.855383  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.855472  881469 ssh_runner.go:195] Run: cat /version.json
	I1114 16:14:23.855501  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHHostname
	I1114 16:14:23.858531  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.858707  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.858984  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.859019  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:23.859041  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.859056  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:23.859226  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.859241  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHPort
	I1114 16:14:23.859435  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.859451  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHKeyPath
	I1114 16:14:23.859662  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.859667  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetSSHUsername
	I1114 16:14:23.859823  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.859823  881469 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/newest-cni-161256/id_rsa Username:docker}
	I1114 16:14:23.947110  881469 ssh_runner.go:195] Run: systemctl --version
	I1114 16:14:23.975201  881469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 16:14:24.146755  881469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 16:14:24.153898  881469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 16:14:24.153973  881469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 16:14:24.170773  881469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
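	(The find/mv step above sidelines any bridge or podman CNI configs under /etc/cni/net.d by renaming them to "<name>.mk_disabled" so only the runtime-managed config stays active. A minimal local sketch of the same rename pass is below; paths and patterns mirror the logged command, and the program is an illustration rather than minikube's ssh_runner-based code.)

	// disablecni.go - hedged sketch: rename bridge/podman CNI configs aside.
	package main

	import (
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Same patterns as the `find ... -name *bridge* -or -name *podman*` in the log.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					log.Printf("disable %s: %v", src, err)
				}
			}
		}
	}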
	I1114 16:14:24.170798  881469 start.go:472] detecting cgroup driver to use...
	I1114 16:14:24.170898  881469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 16:14:24.184315  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 16:14:24.195742  881469 docker.go:203] disabling cri-docker service (if available) ...
	I1114 16:14:24.195812  881469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 16:14:24.208418  881469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 16:14:24.220829  881469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 16:14:24.326701  881469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 16:14:24.448062  881469 docker.go:219] disabling docker service ...
	I1114 16:14:24.448137  881469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 16:14:24.461347  881469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 16:14:24.474044  881469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 16:14:24.588367  881469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 16:14:24.706443  881469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 16:14:24.718562  881469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 16:14:24.736225  881469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 16:14:24.736304  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.745622  881469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 16:14:24.745695  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.754757  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.763742  881469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 16:14:24.773060  881469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 16:14:24.782622  881469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 16:14:24.790914  881469 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 16:14:24.790977  881469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 16:14:24.804357  881469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
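	(The sequence above probes the bridge-nf-call-iptables sysctl, treats the "cannot stat /proc/sys/net/bridge/..." failure as a sign the module is not loaded, loads br_netfilter, and then enables IPv4 forwarding. A compact sketch of that probe-and-fallback order is below; it runs the commands locally as root and is an illustration of the logged steps, not minikube's remote runner.)

	// netfilter.go - hedged sketch of the sysctl probe with modprobe fallback.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v failed: %v (%s)", name, args, err, out)
		}
		return err
	}

	func main() {
		// Probe first; a non-zero exit here usually means br_netfilter is not loaded.
		if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// Fallback mirrors the log: load the module so the sysctl appears.
			_ = run("modprobe", "br_netfilter")
		}
		// Enable IPv4 forwarding, as in the `echo 1 > /proc/sys/net/ipv4/ip_forward` step.
		_ = run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}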
	I1114 16:14:24.815049  881469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 16:14:24.928182  881469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 16:14:25.100061  881469 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 16:14:25.100131  881469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 16:14:25.105250  881469 start.go:540] Will wait 60s for crictl version
	I1114 16:14:25.105312  881469 ssh_runner.go:195] Run: which crictl
	I1114 16:14:25.109193  881469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 16:14:25.154864  881469 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 16:14:25.154991  881469 ssh_runner.go:195] Run: crio --version
	I1114 16:14:25.203888  881469 ssh_runner.go:195] Run: crio --version
	I1114 16:14:25.253040  881469 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 16:14:25.254574  881469 main.go:141] libmachine: (newest-cni-161256) Calling .GetIP
	I1114 16:14:25.257607  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:25.258099  881469 main.go:141] libmachine: (newest-cni-161256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:29:44", ip: ""} in network mk-newest-cni-161256: {Iface:virbr1 ExpiryTime:2023-11-14 17:14:14 +0000 UTC Type:0 Mac:52:54:00:06:29:44 Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:newest-cni-161256 Clientid:01:52:54:00:06:29:44}
	I1114 16:14:25.258150  881469 main.go:141] libmachine: (newest-cni-161256) DBG | domain newest-cni-161256 has defined IP address 192.168.72.15 and MAC address 52:54:00:06:29:44 in network mk-newest-cni-161256
	I1114 16:14:25.258401  881469 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 16:14:25.264052  881469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 16:14:25.277627  881469 localpath.go:92] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.crt -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.crt
	I1114 16:14:25.277799  881469 localpath.go:117] copying /home/jenkins/minikube-integration/17598-824991/.minikube/client.key -> /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/newest-cni-161256/client.key
	I1114 16:14:25.279677  881469 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:54:33 UTC, ends at Tue 2023-11-14 16:14:27 UTC. --
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.780391398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978467780374700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=b9feaba9-9ce7-4b99-8a17-e2c4715b1cf1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.781060995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85b18738-28b3-423c-b37e-dfbf0c21eee6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.781143678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85b18738-28b3-423c-b37e-dfbf0c21eee6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.781387362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85b18738-28b3-423c-b37e-dfbf0c21eee6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.830491621Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ffb580b1-d42f-4313-966d-af47232a6910 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.830637578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ffb580b1-d42f-4313-966d-af47232a6910 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.833231501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=38d33ec9-81ba-4680-8e02-f7c37c2f39d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.833545753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978467833535397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=38d33ec9-81ba-4680-8e02-f7c37c2f39d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.834233094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0cbd0e87-ad62-4a6f-b30e-65000674c85f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.834275964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0cbd0e87-ad62-4a6f-b30e-65000674c85f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.838160859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0cbd0e87-ad62-4a6f-b30e-65000674c85f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.890926585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a707a7cc-c18e-4f76-a17b-e55a717adc35 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.891284383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a707a7cc-c18e-4f76-a17b-e55a717adc35 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.893530060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=19a5771b-a446-4df0-baaf-b77076a0e142 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.894170088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978467894150400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=19a5771b-a446-4df0-baaf-b77076a0e142 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.895424823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9b547c10-c8bf-408d-b0c7-a9852377211b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.895511917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9b547c10-c8bf-408d-b0c7-a9852377211b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.895714170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9b547c10-c8bf-408d-b0c7-a9852377211b name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.935250699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=709b0910-47a8-46cb-8a38-daf5da097012 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.935311518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=709b0910-47a8-46cb-8a38-daf5da097012 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.936440571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=23116f8e-d62d-4663-97fb-fc2c388697dd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.936800283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978467936787967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=23116f8e-d62d-4663-97fb-fc2c388697dd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.937463525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=be6dbd6c-07b5-4a60-93ec-44ba7b789d30 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.937513106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=be6dbd6c-07b5-4a60-93ec-44ba7b789d30 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:14:27 no-preload-490998 crio[726]: time="2023-11-14 16:14:27.937743704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b,PodSandboxId:152ae7f3a0d6a4b08d01a8d537ca3774f4993a1f42189ee162edb9a1495629af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699977615336575909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a23261de-849c-41b5-9e5f-7230461b67d8,},Annotations:map[string]string{io.kubernetes.container.hash: 152bd272,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874,PodSandboxId:7bbe0277a33b36bc9f456a2e0cb847888b9feae8a79edc72a40aba69e04cb264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699977615185050571,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9nc8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19df2d35,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb,PodSandboxId:bf58e53fbfda07749e339691ea969198ace26d3bf1ed7e35dacf163873c08f98,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699977614630486862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-khvq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 30205a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30,PodSandboxId:6d0dbe1c66e6393f6b75ed2c27b7b8ed867ac819bec76e15928faeecfd401bd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699977590951450497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
efba73e1c365132017949c57e903b533,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a,PodSandboxId:bd0e50c61e6d5b1f740e6201a8d010b8dc09bcdbb86c6bbc98c010b554e31d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699977590694018311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b508739bef
8b7b42857234904491d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9654ba19,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b,PodSandboxId:5383ecf8d0030486809a018ef8c8befc19ce84a1f50d5ee9b451eedc1728dd63,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699977590483673939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e7a8cdb1abe81115f9f4ddf44f4541,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6c681eab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4,PodSandboxId:cf0343989e81eca713d2f60761c776441139af451ec7a1fc43768e47962441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699977590284795211,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-490998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b62aaaa08313b0380ea33995759132a,},An
notations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=be6dbd6c-07b5-4a60-93ec-44ba7b789d30 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c16c8a8b7d924       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   152ae7f3a0d6a       storage-provisioner
	206abe3a8e40b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   14 minutes ago      Running             kube-proxy                0                   7bbe0277a33b3       kube-proxy-9nc8j
	a1d2a3ee458c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   bf58e53fbfda0       coredns-5dd5756b68-khvq4
	2ff2b10d3fae8       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   14 minutes ago      Running             kube-scheduler            2                   6d0dbe1c66e63       kube-scheduler-no-preload-490998
	c5e3f1f96b48b       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   14 minutes ago      Running             kube-apiserver            2                   bd0e50c61e6d5       kube-apiserver-no-preload-490998
	52c9022a0dbcb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   5383ecf8d0030       etcd-no-preload-490998
	e7ca7216e4f95       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   14 minutes ago      Running             kube-controller-manager   2                   cf0343989e81e       kube-controller-manager-no-preload-490998
	
	* 
	* ==> coredns [a1d2a3ee458c476b9ea7aa588dbe8afd406f1be312407e640522abca70a936cb] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-490998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-490998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=no-preload-490998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:59:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-490998
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 16:14:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:10:31 +0000   Tue, 14 Nov 2023 15:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:10:31 +0000   Tue, 14 Nov 2023 15:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:10:31 +0000   Tue, 14 Nov 2023 15:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:10:31 +0000   Tue, 14 Nov 2023 15:59:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    no-preload-490998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3b444f88fbc44fea26e699ddb0dadbc
	  System UUID:                e3b444f8-8fbc-44fe-a26e-699ddb0dadbc
	  Boot ID:                    6de318c0-2cd2-4464-a975-083168e9b66f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-khvq4                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-490998                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-490998             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-490998    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-9nc8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-490998             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-cljst              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-490998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-490998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-490998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-490998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-490998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-490998 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node no-preload-490998 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node no-preload-490998 status is now: NodeReady
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-490998 event: Registered Node no-preload-490998 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov14 15:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075571] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.751720] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.347618] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150840] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.536651] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.228115] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.149748] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.167735] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.124031] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.264372] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Nov14 15:55] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +19.347768] kauditd_printk_skb: 29 callbacks suppressed
	[Nov14 15:59] systemd-fstab-generator[3886]: Ignoring "noauto" for root device
	[  +9.316479] systemd-fstab-generator[4209]: Ignoring "noauto" for root device
	[Nov14 16:00] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [52c9022a0dbcb77ef03fcf18bef7b542075c7f006cf0acfc9b4cf9bcae2bc44b] <==
	* {"level":"info","ts":"2023-11-14T15:59:52.572719Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"439bb489ce44e0e1","initial-advertise-peer-urls":["https://192.168.50.251:2380"],"listen-peer-urls":["https://192.168.50.251:2380"],"advertise-client-urls":["https://192.168.50.251:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.251:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T15:59:52.573834Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T15:59:52.572136Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2023-11-14T15:59:52.574832Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2023-11-14T15:59:53.12094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T15:59:53.121104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T15:59:53.121151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgPreVoteResp from 439bb489ce44e0e1 at term 1"}
	{"level":"info","ts":"2023-11-14T15:59:53.121186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.12122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 received MsgVoteResp from 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.121253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"439bb489ce44e0e1 became leader at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.121279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 439bb489ce44e0e1 elected leader 439bb489ce44e0e1 at term 2"}
	{"level":"info","ts":"2023-11-14T15:59:53.122758Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"439bb489ce44e0e1","local-member-attributes":"{Name:no-preload-490998 ClientURLs:[https://192.168.50.251:2379]}","request-path":"/0/members/439bb489ce44e0e1/attributes","cluster-id":"dd9b68cf7bac6d9","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T15:59:53.123124Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:59:53.124093Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T15:59:53.124868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.251:2379"}
	{"level":"info","ts":"2023-11-14T15:59:53.125026Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T15:59:53.125064Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T15:59:53.125399Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:59:53.125942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T15:59:53.126641Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd9b68cf7bac6d9","local-member-id":"439bb489ce44e0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:59:53.126744Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T15:59:53.126782Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T16:09:53.419683Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2023-11-14T16:09:53.422277Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.180287ms","hash":2088075100}
	{"level":"info","ts":"2023-11-14T16:09:53.422361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2088075100,"revision":714,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  16:14:28 up 20 min,  0 users,  load average: 0.11, 0.27, 0.25
	Linux no-preload-490998 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c5e3f1f96b48b7576e85abcef31e6dd0a9a0926286e58aa6d5e3f36abfce1b7a] <==
	* W1114 16:09:55.886172       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:09:55.886332       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:09:55.886368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:09:55.886503       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:09:55.886567       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:09:55.887913       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:10:54.761557       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:10:55.887250       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:10:55.887522       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:10:55.887574       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:10:55.888570       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:10:55.888630       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:10:55.888641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:11:54.762069       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1114 16:12:54.761472       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1114 16:12:55.888333       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:12:55.888471       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:12:55.888517       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1114 16:12:55.889479       1 handler_proxy.go:93] no RequestInfo found in the context
	E1114 16:12:55.889525       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1114 16:12:55.889536       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:13:54.761162       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [e7ca7216e4f95494c88301b8e896a0893c55b1eb0c5418c54b868b22e21da2c4] <==
	* I1114 16:08:41.832501       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:09:11.368754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:09:11.844839       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:09:41.374188       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:09:41.854405       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:10:11.390681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:10:11.866869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:10:41.396807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:10:41.876199       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:11:05.580423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="146.907µs"
	E1114 16:11:11.402392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:11:11.885087       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1114 16:11:18.580149       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="90.114µs"
	E1114 16:11:41.408464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:11:41.894639       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:12:11.419536       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:12:11.903643       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:12:41.425301       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:12:41.912749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:13:11.430636       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:13:11.922147       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:13:41.435733       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:13:41.930788       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1114 16:14:11.449925       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1114 16:14:11.940257       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [206abe3a8e40bd37b180577677a86ac6e91cb6b9f6cceb74281791e37c683874] <==
	* I1114 16:00:15.544834       1 server_others.go:69] "Using iptables proxy"
	I1114 16:00:15.655801       1 node.go:141] Successfully retrieved node IP: 192.168.50.251
	I1114 16:00:15.704386       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 16:00:15.704459       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 16:00:15.707500       1 server_others.go:152] "Using iptables Proxier"
	I1114 16:00:15.707652       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 16:00:15.707870       1 server.go:846] "Version info" version="v1.28.3"
	I1114 16:00:15.707884       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 16:00:15.708887       1 config.go:188] "Starting service config controller"
	I1114 16:00:15.709166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 16:00:15.709228       1 config.go:97] "Starting endpoint slice config controller"
	I1114 16:00:15.709234       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 16:00:15.710115       1 config.go:315] "Starting node config controller"
	I1114 16:00:15.710154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 16:00:15.809705       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 16:00:15.809770       1 shared_informer.go:318] Caches are synced for service config
	I1114 16:00:15.815309       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2ff2b10d3fae869d74eb9a1fa505169dd4039bd11805a60115000ca5f1404a30] <==
	* W1114 15:59:54.958638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:54.958674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 15:59:54.958744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:54.958756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 15:59:54.958942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:59:54.959038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:59:55.819846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:59:55.819937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 15:59:55.838323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:59:55.838351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 15:59:55.882261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:59:55.882352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 15:59:55.891560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:55.891629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 15:59:55.913017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 15:59:55.913070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 15:59:56.066763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:59:56.066899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 15:59:56.085479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:56.085603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 15:59:56.138358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:56.138499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 15:59:56.350026       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 15:59:56.350110       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1114 15:59:58.627477       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:54:33 UTC, ends at Tue 2023-11-14 16:14:28 UTC. --
	Nov 14 16:11:41 no-preload-490998 kubelet[4216]: E1114 16:11:41.562787    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:11:55 no-preload-490998 kubelet[4216]: E1114 16:11:55.562738    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:11:58 no-preload-490998 kubelet[4216]: E1114 16:11:58.674374    4216 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:11:58 no-preload-490998 kubelet[4216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:11:58 no-preload-490998 kubelet[4216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:11:58 no-preload-490998 kubelet[4216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:12:09 no-preload-490998 kubelet[4216]: E1114 16:12:09.564218    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:12:24 no-preload-490998 kubelet[4216]: E1114 16:12:24.563808    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:12:35 no-preload-490998 kubelet[4216]: E1114 16:12:35.563023    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:12:50 no-preload-490998 kubelet[4216]: E1114 16:12:50.563798    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:12:58 no-preload-490998 kubelet[4216]: E1114 16:12:58.675692    4216 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:12:58 no-preload-490998 kubelet[4216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:12:58 no-preload-490998 kubelet[4216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:12:58 no-preload-490998 kubelet[4216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:13:05 no-preload-490998 kubelet[4216]: E1114 16:13:05.563363    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:13:18 no-preload-490998 kubelet[4216]: E1114 16:13:18.563053    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:13:32 no-preload-490998 kubelet[4216]: E1114 16:13:32.564112    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:13:43 no-preload-490998 kubelet[4216]: E1114 16:13:43.563095    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:13:57 no-preload-490998 kubelet[4216]: E1114 16:13:57.562074    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:13:58 no-preload-490998 kubelet[4216]: E1114 16:13:58.678088    4216 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 16:13:58 no-preload-490998 kubelet[4216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 16:13:58 no-preload-490998 kubelet[4216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 16:13:58 no-preload-490998 kubelet[4216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 16:14:12 no-preload-490998 kubelet[4216]: E1114 16:14:12.564100    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	Nov 14 16:14:25 no-preload-490998 kubelet[4216]: E1114 16:14:25.563459    4216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cljst" podUID="3e8d5772-4204-44cb-9e85-41081d8a6510"
	
	* 
	* ==> storage-provisioner [c16c8a8b7d924e0b9acd5bbc7e8ce58e0103be6bd50bebdb218a76fa1146bc2b] <==
	* I1114 16:00:15.520259       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 16:00:15.534900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 16:00:15.535343       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 16:00:15.550258       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 16:00:15.550701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-490998_fe5af1c2-ba49-4b80-8dd0-8ceb66467d8d!
	I1114 16:00:15.556584       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbb40898-897a-4836-aaa9-fe3ebbe609bf", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-490998_fe5af1c2-ba49-4b80-8dd0-8ceb66467d8d became leader
	I1114 16:00:15.651102       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-490998_fe5af1c2-ba49-4b80-8dd0-8ceb66467d8d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-490998 -n no-preload-490998
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-490998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-cljst
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-490998 describe pod metrics-server-57f55c9bc5-cljst
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-490998 describe pod metrics-server-57f55c9bc5-cljst: exit status 1 (86.298132ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cljst" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-490998 describe pod metrics-server-57f55c9bc5-cljst: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (308.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (238.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1114 16:10:55.158041  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 16:11:27.620717  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 16:11:34.577647  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 16:11:36.377362  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 16:12:21.221555  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 16:13:22.912700  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 16:13:48.692345  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 16:13:52.668895  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 16:13:53.607576  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-842105 -n old-k8s-version-842105
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-14 16:13:53.935382728 +0000 UTC m=+5701.465567682
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-842105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-842105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.231µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-842105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-842105 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-842105 logs -n 25: (1.541247324s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-492851 sudo                          | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-492851                               | custom-flannel-492851        | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-331502 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:45 UTC |
	|         | disable-driver-mounts-331502                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:45 UTC | 14 Nov 23 15:47 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-490998             | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-279880            | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-842105        | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC | 14 Nov 23 15:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-529430  | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC | 14 Nov 23 15:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:47 UTC |                     |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-490998                  | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-490998                                   | no-preload-490998            | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-279880                 | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-279880                                  | embed-certs-279880           | jenkins | v1.32.0 | 14 Nov 23 15:48 UTC | 14 Nov 23 15:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-842105             | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-842105                              | old-k8s-version-842105       | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 16:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-529430       | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-529430 | jenkins | v1.32.0 | 14 Nov 23 15:49 UTC | 14 Nov 23 15:59 UTC |
	|         | default-k8s-diff-port-529430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 15:49:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 15:49:49.997953  876668 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:49:49.998137  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998147  876668 out.go:309] Setting ErrFile to fd 2...
	I1114 15:49:49.998152  876668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:49:49.998369  876668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:49:49.998978  876668 out.go:303] Setting JSON to false
	I1114 15:49:50.000072  876668 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":45142,"bootTime":1699931848,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:49:50.000141  876668 start.go:138] virtualization: kvm guest
	I1114 15:49:50.002690  876668 out.go:177] * [default-k8s-diff-port-529430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:49:50.004392  876668 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:49:50.004396  876668 notify.go:220] Checking for updates...
	I1114 15:49:50.006193  876668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:49:50.007844  876668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:49:50.009232  876668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:49:50.010572  876668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:49:50.011857  876668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:49:50.013604  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:49:50.014059  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.014149  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.028903  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I1114 15:49:50.029290  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.029869  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.029892  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.030244  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.030474  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.030753  876668 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:49:50.031049  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:49:50.031096  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:49:50.045696  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I1114 15:49:50.046117  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:49:50.046625  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:49:50.046658  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:49:50.047069  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:49:50.047303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:49:50.082731  876668 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 15:49:50.084362  876668 start.go:298] selected driver: kvm2
	I1114 15:49:50.084384  876668 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.084517  876668 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:49:50.085533  876668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.085625  876668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 15:49:50.100834  876668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 15:49:50.101226  876668 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 15:49:50.101308  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:49:50.101328  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:49:50.101342  876668 start_flags.go:323] config:
	{Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-52943
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:49:50.101540  876668 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 15:49:50.103413  876668 out.go:177] * Starting control plane node default-k8s-diff-port-529430 in cluster default-k8s-diff-port-529430
	I1114 15:49:49.196989  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:52.269051  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:49:50.104763  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:49:50.104815  876668 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 15:49:50.104835  876668 cache.go:56] Caching tarball of preloaded images
	I1114 15:49:50.104932  876668 preload.go:174] Found /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1114 15:49:50.104946  876668 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 15:49:50.105089  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:49:50.105307  876668 start.go:365] acquiring machines lock for default-k8s-diff-port-529430: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:49:58.349061  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:01.421017  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:07.501030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:10.573058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:16.653093  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:19.725040  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:25.805047  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:28.877039  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:34.957084  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:38.029008  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:44.109068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:47.181018  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:53.261065  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:50:56.333048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:02.413048  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:05.485063  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:11.565034  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:14.636996  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:20.717050  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:23.789097  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:29.869058  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:32.941066  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:39.021029  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:42.093064  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:48.173074  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:51.245007  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:51:57.325014  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:00.397111  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:06.477052  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:09.549016  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:15.629105  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:18.701000  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:24.781004  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:27.853046  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:33.933030  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:37.005067  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:43.085068  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:46.157044  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:52.237056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:52:55.309080  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:01.389056  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:04.461005  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:10.541083  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:13.613033  876065 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.251:22: connect: no route to host
	I1114 15:53:16.617368  876220 start.go:369] acquired machines lock for "embed-certs-279880" in 4m25.691009916s
	I1114 15:53:16.617492  876220 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:16.617500  876220 fix.go:54] fixHost starting: 
	I1114 15:53:16.617993  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:16.618029  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:16.633223  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I1114 15:53:16.633787  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:16.634385  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:53:16.634412  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:16.634743  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:16.634958  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:16.635120  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:53:16.636933  876220 fix.go:102] recreateIfNeeded on embed-certs-279880: state=Stopped err=<nil>
	I1114 15:53:16.636967  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	W1114 15:53:16.637164  876220 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:16.638727  876220 out.go:177] * Restarting existing kvm2 VM for "embed-certs-279880" ...
	I1114 15:53:16.615062  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:16.615116  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:53:16.617147  876065 machine.go:91] provisioned docker machine in 4m37.399136623s
	I1114 15:53:16.617196  876065 fix.go:56] fixHost completed within 4m37.422027817s
	I1114 15:53:16.617203  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 4m37.422123699s
	W1114 15:53:16.617282  876065 start.go:691] error starting host: provision: host is not running
	W1114 15:53:16.617491  876065 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1114 15:53:16.617502  876065 start.go:706] Will try again in 5 seconds ...
	I1114 15:53:16.640137  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Start
	I1114 15:53:16.640330  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring networks are active...
	I1114 15:53:16.641029  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network default is active
	I1114 15:53:16.641386  876220 main.go:141] libmachine: (embed-certs-279880) Ensuring network mk-embed-certs-279880 is active
	I1114 15:53:16.641738  876220 main.go:141] libmachine: (embed-certs-279880) Getting domain xml...
	I1114 15:53:16.642488  876220 main.go:141] libmachine: (embed-certs-279880) Creating domain...
	I1114 15:53:17.858298  876220 main.go:141] libmachine: (embed-certs-279880) Waiting to get IP...
	I1114 15:53:17.859506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:17.859912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:17.860039  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:17.859881  877182 retry.go:31] will retry after 225.269159ms: waiting for machine to come up
	I1114 15:53:18.086611  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.087099  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.087135  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.087062  877182 retry.go:31] will retry after 322.840106ms: waiting for machine to come up
	I1114 15:53:18.411781  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.412238  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.412278  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.412127  877182 retry.go:31] will retry after 459.77881ms: waiting for machine to come up
	I1114 15:53:18.873994  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:18.874393  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:18.874433  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:18.874341  877182 retry.go:31] will retry after 460.123636ms: waiting for machine to come up
	I1114 15:53:19.335916  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.336488  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.336520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.336414  877182 retry.go:31] will retry after 526.141665ms: waiting for machine to come up
	I1114 15:53:19.864336  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:19.864830  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:19.864856  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:19.864766  877182 retry.go:31] will retry after 817.261813ms: waiting for machine to come up
	I1114 15:53:20.683806  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:20.684289  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:20.684309  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:20.684244  877182 retry.go:31] will retry after 1.026381849s: waiting for machine to come up
	I1114 15:53:21.619196  876065 start.go:365] acquiring machines lock for no-preload-490998: {Name:mkb294d45e5af5635c8946ced0a33ff21c5efba3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 15:53:21.712760  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:21.713237  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:21.713263  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:21.713201  877182 retry.go:31] will retry after 1.088672482s: waiting for machine to come up
	I1114 15:53:22.803222  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:22.803698  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:22.803734  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:22.803639  877182 retry.go:31] will retry after 1.394534659s: waiting for machine to come up
	I1114 15:53:24.199372  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:24.199764  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:24.199794  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:24.199706  877182 retry.go:31] will retry after 1.511094366s: waiting for machine to come up
	I1114 15:53:25.713650  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:25.714062  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:25.714107  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:25.713980  877182 retry.go:31] will retry after 1.821074261s: waiting for machine to come up
	I1114 15:53:27.536875  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:27.537423  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:27.537458  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:27.537349  877182 retry.go:31] will retry after 2.856840662s: waiting for machine to come up
	I1114 15:53:30.395562  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:30.396059  876220 main.go:141] libmachine: (embed-certs-279880) DBG | unable to find current IP address of domain embed-certs-279880 in network mk-embed-certs-279880
	I1114 15:53:30.396086  876220 main.go:141] libmachine: (embed-certs-279880) DBG | I1114 15:53:30.396007  877182 retry.go:31] will retry after 4.003431067s: waiting for machine to come up
	I1114 15:53:35.689894  876396 start.go:369] acquired machines lock for "old-k8s-version-842105" in 4m23.221865246s
	I1114 15:53:35.689964  876396 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:35.689973  876396 fix.go:54] fixHost starting: 
	I1114 15:53:35.690410  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:35.690446  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:35.709418  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I1114 15:53:35.709816  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:35.710366  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:53:35.710400  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:35.710760  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:35.710946  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:35.711101  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:53:35.712666  876396 fix.go:102] recreateIfNeeded on old-k8s-version-842105: state=Stopped err=<nil>
	I1114 15:53:35.712696  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	W1114 15:53:35.712882  876396 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:35.715357  876396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-842105" ...
	I1114 15:53:34.403163  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.403706  876220 main.go:141] libmachine: (embed-certs-279880) Found IP for machine: 192.168.39.147
	I1114 15:53:34.403737  876220 main.go:141] libmachine: (embed-certs-279880) Reserving static IP address...
	I1114 15:53:34.403757  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has current primary IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.404290  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.404318  876220 main.go:141] libmachine: (embed-certs-279880) DBG | skip adding static IP to network mk-embed-certs-279880 - found existing host DHCP lease matching {name: "embed-certs-279880", mac: "52:54:00:50:2f:80", ip: "192.168.39.147"}
	I1114 15:53:34.404331  876220 main.go:141] libmachine: (embed-certs-279880) Reserved static IP address: 192.168.39.147
	I1114 15:53:34.404343  876220 main.go:141] libmachine: (embed-certs-279880) Waiting for SSH to be available...
	I1114 15:53:34.404351  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Getting to WaitForSSH function...
	I1114 15:53:34.406833  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407219  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.407248  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.407367  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH client type: external
	I1114 15:53:34.407412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa (-rw-------)
	I1114 15:53:34.407481  876220 main.go:141] libmachine: (embed-certs-279880) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:34.407525  876220 main.go:141] libmachine: (embed-certs-279880) DBG | About to run SSH command:
	I1114 15:53:34.407551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | exit 0
	I1114 15:53:34.504225  876220 main.go:141] libmachine: (embed-certs-279880) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:34.504696  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetConfigRaw
	I1114 15:53:34.505414  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.508202  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.508632  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.508685  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.509034  876220 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/config.json ...
	I1114 15:53:34.509282  876220 machine.go:88] provisioning docker machine ...
	I1114 15:53:34.509309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:34.509521  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509730  876220 buildroot.go:166] provisioning hostname "embed-certs-279880"
	I1114 15:53:34.509758  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.509894  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.511987  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512285  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.512317  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.512472  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.512629  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512751  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.512925  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.513118  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.513578  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.513594  876220 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-279880 && echo "embed-certs-279880" | sudo tee /etc/hostname
	I1114 15:53:34.664546  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-279880
	
	I1114 15:53:34.664595  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.667537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.667908  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.667941  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.668142  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.668388  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668631  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.668910  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.669142  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:34.669652  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:34.669684  876220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-279880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-279880/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-279880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:34.810710  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:34.810745  876220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:34.810768  876220 buildroot.go:174] setting up certificates
	I1114 15:53:34.810780  876220 provision.go:83] configureAuth start
	I1114 15:53:34.810788  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetMachineName
	I1114 15:53:34.811138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:34.814056  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814506  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.814537  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.814747  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.817131  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817513  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.817544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.817675  876220 provision.go:138] copyHostCerts
	I1114 15:53:34.817774  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:34.817789  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:34.817869  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:34.817990  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:34.818006  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:34.818039  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:34.818117  876220 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:34.818129  876220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:34.818161  876220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:34.818226  876220 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.embed-certs-279880 san=[192.168.39.147 192.168.39.147 localhost 127.0.0.1 minikube embed-certs-279880]
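The line above records minikube generating a server certificate for the guest with the listed SANs. As a rough illustration only, comparable openssl commands are sketched below; minikube does this in Go, and the file names and validity period here are assumptions, not paths from the log.

	# Hypothetical openssl equivalent of the server-cert generation logged above.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.embed-certs-279880"
	openssl x509 -req -in server.csr \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:192.168.39.147,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-279880")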
	I1114 15:53:34.925955  876220 provision.go:172] copyRemoteCerts
	I1114 15:53:34.926014  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:34.926039  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:34.928954  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929322  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:34.929346  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:34.929520  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:34.929703  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:34.929866  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:34.930033  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.026199  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:35.049682  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1114 15:53:35.072415  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:53:35.097200  876220 provision.go:86] duration metric: configureAuth took 286.405404ms
	I1114 15:53:35.097226  876220 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:35.097425  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:53:35.097558  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.100561  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.100912  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.100965  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.101091  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.101296  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101500  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.101641  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.101795  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.102165  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.102198  876220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:35.411682  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:35.411719  876220 machine.go:91] provisioned docker machine in 902.419916ms
	I1114 15:53:35.411733  876220 start.go:300] post-start starting for "embed-certs-279880" (driver="kvm2")
	I1114 15:53:35.411748  876220 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:35.411770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.412161  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:35.412201  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.415071  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415520  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.415551  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.415666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.415849  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.416000  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.416143  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.512565  876220 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:35.517087  876220 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:35.517146  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:35.517235  876220 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:35.517356  876220 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:35.517511  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:35.527797  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:35.552798  876220 start.go:303] post-start completed in 141.045785ms
	I1114 15:53:35.552827  876220 fix.go:56] fixHost completed within 18.935326604s
	I1114 15:53:35.552855  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.555540  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.555930  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.555970  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.556155  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.556390  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556573  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.556770  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.557007  876220 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:35.557338  876220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1114 15:53:35.557348  876220 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:53:35.689729  876220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977215.639237319
	
	I1114 15:53:35.689759  876220 fix.go:206] guest clock: 1699977215.639237319
	I1114 15:53:35.689769  876220 fix.go:219] Guest: 2023-11-14 15:53:35.639237319 +0000 UTC Remote: 2023-11-14 15:53:35.552830911 +0000 UTC m=+284.779127994 (delta=86.406408ms)
	I1114 15:53:35.689801  876220 fix.go:190] guest clock delta is within tolerance: 86.406408ms
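The lines above record the guest-clock check: the guest runs date +%s.%N, the output is parsed, and the difference against the host-side timestamp is compared to a tolerance. A hypothetical recomputation from the two values logged above (bc assumed to be available):

	# Hypothetical recomputation of the clock delta from the values in the log.
	guest=1699977215.639237319      # `date +%s.%N` output from the guest
	remote=1699977215.552830911     # host-side timestamp recorded by minikube
	echo "delta: $(echo "$guest - $remote" | bc)s"   # ~0.086s, i.e. the 86.406408ms above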
	I1114 15:53:35.689807  876220 start.go:83] releasing machines lock for "embed-certs-279880", held for 19.072338997s
	I1114 15:53:35.689842  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.690197  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:35.692786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693260  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.693311  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.693440  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694011  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694222  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:53:35.694338  876220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:35.694404  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.694455  876220 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:35.694484  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:53:35.697198  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697220  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697702  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697732  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697771  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:35.697786  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:35.697865  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698085  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698088  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:53:35.698297  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:53:35.698303  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698438  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:53:35.698562  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.698974  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:53:35.813318  876220 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:35.819124  876220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:35.957038  876220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:35.964876  876220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:35.964984  876220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:35.980758  876220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:35.980780  876220 start.go:472] detecting cgroup driver to use...
	I1114 15:53:35.980848  876220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:35.993968  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:36.006564  876220 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:36.006626  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:36.021314  876220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:36.035842  876220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:36.147617  876220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:36.268025  876220 docker.go:219] disabling docker service ...
	I1114 15:53:36.268113  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:36.280847  876220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:36.292659  876220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:36.414923  876220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:36.534481  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:36.547652  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:36.565229  876220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:53:36.565312  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.574949  876220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:36.575030  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.585105  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:36.594790  876220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
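The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup to pod in the CRI-O drop-in. A hypothetical spot check of the resulting file, with expected values inferred from the sed expressions:

	# Hypothetical verification of the CRI-O drop-in edited above.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, assuming the sed edits applied cleanly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"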
	I1114 15:53:36.603613  876220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:36.613116  876220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:36.620828  876220 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:36.620884  876220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:36.632600  876220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
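The warning above ("couldn't verify netfilter ... which might be okay") appears because the br_netfilter module was not loaded yet, so the sysctl key did not exist; loading the module creates it. A hypothetical manual check of the same settings (paths taken from the log):

	# Hypothetical re-verification of the bridge/netfilter settings configured above.
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward                # should print 1 after the echo above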
	I1114 15:53:36.642150  876220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:36.756773  876220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:36.929381  876220 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:36.929467  876220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:36.934735  876220 start.go:540] Will wait 60s for crictl version
	I1114 15:53:36.934790  876220 ssh_runner.go:195] Run: which crictl
	I1114 15:53:36.940182  876220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:36.991630  876220 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:36.991718  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.045160  876220 ssh_runner.go:195] Run: crio --version
	I1114 15:53:37.097281  876220 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:53:35.716835  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Start
	I1114 15:53:35.716987  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring networks are active...
	I1114 15:53:35.717715  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network default is active
	I1114 15:53:35.718030  876396 main.go:141] libmachine: (old-k8s-version-842105) Ensuring network mk-old-k8s-version-842105 is active
	I1114 15:53:35.718429  876396 main.go:141] libmachine: (old-k8s-version-842105) Getting domain xml...
	I1114 15:53:35.719055  876396 main.go:141] libmachine: (old-k8s-version-842105) Creating domain...
	I1114 15:53:36.991860  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting to get IP...
	I1114 15:53:36.992911  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:36.993376  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:36.993427  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:36.993318  877295 retry.go:31] will retry after 227.553321ms: waiting for machine to come up
	I1114 15:53:37.223023  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.223561  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.223629  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.223511  877295 retry.go:31] will retry after 308.951372ms: waiting for machine to come up
	I1114 15:53:37.098693  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetIP
	I1114 15:53:37.102205  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102676  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:53:37.102710  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:53:37.102955  876220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:37.107113  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:37.120009  876220 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:53:37.120075  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:37.160178  876220 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:53:37.160241  876220 ssh_runner.go:195] Run: which lz4
	I1114 15:53:37.164351  876220 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:53:37.168645  876220 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:53:37.168684  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:53:39.026796  876220 crio.go:444] Took 1.862508 seconds to copy over tarball
	I1114 15:53:39.026876  876220 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
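The preceding lines show the preload path: crictl finds no v1.28.3 images, the cached preloaded-images tarball is copied into the guest as /preloaded.tar.lz4, and it is extracted under /var. A condensed, hypothetical shell version of that sequence (the copy itself happens over SSH via scp in the actual run):

	# Hypothetical condensation of the preload sequence; paths are taken from the log.
	if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.28.3'; then
	    # the tarball is copied from the host cache over SSH in the real run
	    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4
	fi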
	I1114 15:53:37.534243  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.534797  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.534827  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.534774  877295 retry.go:31] will retry after 440.76682ms: waiting for machine to come up
	I1114 15:53:37.977712  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:37.978257  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:37.978287  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:37.978207  877295 retry.go:31] will retry after 402.601155ms: waiting for machine to come up
	I1114 15:53:38.383001  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.383515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.383551  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.383468  877295 retry.go:31] will retry after 580.977501ms: waiting for machine to come up
	I1114 15:53:38.966457  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:38.967088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:38.967121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:38.967026  877295 retry.go:31] will retry after 679.65563ms: waiting for machine to come up
	I1114 15:53:39.648086  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:39.648560  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:39.648593  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:39.648501  877295 retry.go:31] will retry after 1.014858956s: waiting for machine to come up
	I1114 15:53:40.664728  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:40.665285  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:40.665321  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:40.665230  877295 retry.go:31] will retry after 1.035036164s: waiting for machine to come up
	I1114 15:53:41.701639  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:41.702088  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:41.702123  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:41.702029  877295 retry.go:31] will retry after 1.15711647s: waiting for machine to come up
	I1114 15:53:41.885259  876220 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.858355323s)
	I1114 15:53:41.885288  876220 crio.go:451] Took 2.858463 seconds to extract the tarball
	I1114 15:53:41.885300  876220 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:53:41.926498  876220 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:53:41.972943  876220 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:53:41.972980  876220 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:53:41.973053  876220 ssh_runner.go:195] Run: crio config
	I1114 15:53:42.038006  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:53:42.038032  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:53:42.038053  876220 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:53:42.038071  876220 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-279880 NodeName:embed-certs-279880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:53:42.038234  876220 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-279880"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
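The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml a few lines below. Purely as a hypothetical check that is not performed in this run, the same file could be exercised with a dry run:

	# Hypothetical dry run of the generated config; this exact command is not in the log.
	sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run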
	
	I1114 15:53:42.038323  876220 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-279880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:53:42.038394  876220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:53:42.050379  876220 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:53:42.050462  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:53:42.058921  876220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1114 15:53:42.074304  876220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:53:42.090403  876220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1114 15:53:42.106412  876220 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I1114 15:53:42.109907  876220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:42.122915  876220 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880 for IP: 192.168.39.147
	I1114 15:53:42.122945  876220 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:53:42.123106  876220 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:53:42.123148  876220 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:53:42.123226  876220 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/client.key
	I1114 15:53:42.123290  876220 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key.a88b087d
	I1114 15:53:42.123322  876220 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key
	I1114 15:53:42.123430  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:53:42.123456  876220 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:53:42.123467  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:53:42.123486  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:53:42.123517  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:53:42.123541  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:53:42.123584  876220 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:42.124261  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:53:42.149787  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:53:42.177563  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:53:42.203326  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/embed-certs-279880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:53:42.228832  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:53:42.254674  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:53:42.280548  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:53:42.305318  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:53:42.331461  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:53:42.356555  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:53:42.382688  876220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:53:42.407945  876220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:53:42.424902  876220 ssh_runner.go:195] Run: openssl version
	I1114 15:53:42.430411  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:53:42.443033  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448429  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.448496  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:53:42.455631  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:53:42.466421  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:53:42.476013  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480381  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.480434  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:53:42.486048  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:53:42.495375  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:53:42.505366  876220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509762  876220 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.509804  876220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:53:42.515519  876220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:53:42.524838  876220 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:53:42.528912  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:53:42.534641  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:53:42.540138  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:53:42.545849  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:53:42.551518  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:53:42.559001  876220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
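Each openssl x509 -checkend 86400 call above exits 0 only if the certificate is still valid 24 hours from now, which is how the restart path decides whether control-plane certs need regeneration. A hypothetical batch version of the same checks (file list taken from the log):

	# Hypothetical loop over the per-cert checks above.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	    if sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400; then
	        echo "$c: valid for at least 24h"
	    else
	        echo "$c: expiring within 24h (or unreadable)"
	    fi
	done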
	I1114 15:53:42.566135  876220 kubeadm.go:404] StartCluster: {Name:embed-certs-279880 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-279880 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:53:42.566241  876220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:53:42.566297  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:42.613075  876220 cri.go:89] found id: ""
	I1114 15:53:42.613158  876220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:53:42.622675  876220 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:53:42.622696  876220 kubeadm.go:636] restartCluster start
	I1114 15:53:42.622785  876220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:53:42.631529  876220 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.633202  876220 kubeconfig.go:92] found "embed-certs-279880" server: "https://192.168.39.147:8443"
	I1114 15:53:42.636588  876220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:53:42.645531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.645578  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.656733  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.656764  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:42.656807  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:42.667524  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.168290  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.168372  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.181051  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:43.668650  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:43.668772  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:43.681727  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.168359  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.168462  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.182012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:44.668666  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:44.668763  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:44.680872  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.168505  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.168625  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.180321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:45.667875  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:45.668016  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:45.680318  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:42.861352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:42.861900  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:42.861963  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:42.861836  877295 retry.go:31] will retry after 2.117184279s: waiting for machine to come up
	I1114 15:53:44.982059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:44.982506  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:44.982538  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:44.982449  877295 retry.go:31] will retry after 2.3999215s: waiting for machine to come up
	I1114 15:53:46.168271  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.168410  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.180809  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:46.667886  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:46.668009  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:46.679468  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.168072  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.168204  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.180268  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.667786  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:47.667948  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:47.678927  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.168531  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.168660  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.180004  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:48.668597  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:48.668752  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:48.680945  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.168543  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.168635  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.180012  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:49.668382  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:49.668486  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:49.683691  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.168265  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.168353  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.179169  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:50.667618  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:50.667730  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:50.678707  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:47.384177  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:47.384695  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:47.384734  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:47.384649  877295 retry.go:31] will retry after 2.820309413s: waiting for machine to come up
	I1114 15:53:50.208736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:50.209188  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:50.209221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:50.209130  877295 retry.go:31] will retry after 2.822648093s: waiting for machine to come up
	I1114 15:53:51.168046  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.168144  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.179168  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:51.668301  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:51.668407  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:51.680321  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.167971  876220 api_server.go:166] Checking apiserver status ...
	I1114 15:53:52.168062  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:53:52.179159  876220 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:53:52.645656  876220 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:53:52.645688  876220 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:53:52.645702  876220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:53:52.645806  876220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:53:52.682368  876220 cri.go:89] found id: ""
	I1114 15:53:52.682482  876220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:53:52.697186  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:53:52.705449  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:53:52.705503  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714019  876220 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:53:52.714054  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:52.831334  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.796131  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:53.984427  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.060195  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:53:54.137132  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:53:54.137217  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.155040  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:54.676264  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.176129  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:55.676614  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:53.034614  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:53.035044  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | unable to find current IP address of domain old-k8s-version-842105 in network mk-old-k8s-version-842105
	I1114 15:53:53.035078  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | I1114 15:53:53.034993  877295 retry.go:31] will retry after 4.160398149s: waiting for machine to come up
	I1114 15:53:57.196776  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197211  876396 main.go:141] libmachine: (old-k8s-version-842105) Found IP for machine: 192.168.72.151
	I1114 15:53:57.197240  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserving static IP address...
	I1114 15:53:57.197260  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has current primary IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.197667  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.197700  876396 main.go:141] libmachine: (old-k8s-version-842105) Reserved static IP address: 192.168.72.151
	I1114 15:53:57.197724  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | skip adding static IP to network mk-old-k8s-version-842105 - found existing host DHCP lease matching {name: "old-k8s-version-842105", mac: "52:54:00:d4:79:07", ip: "192.168.72.151"}
	I1114 15:53:57.197742  876396 main.go:141] libmachine: (old-k8s-version-842105) Waiting for SSH to be available...
	I1114 15:53:57.197754  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Getting to WaitForSSH function...
	I1114 15:53:57.200279  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200646  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.200681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.200916  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH client type: external
	I1114 15:53:57.200948  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa (-rw-------)
	I1114 15:53:57.200983  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:53:57.200999  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | About to run SSH command:
	I1114 15:53:57.201015  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | exit 0
	I1114 15:53:57.288554  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | SSH cmd err, output: <nil>: 
	I1114 15:53:57.288904  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetConfigRaw
	I1114 15:53:57.289691  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.292087  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292445  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.292501  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.292720  876396 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/config.json ...
	I1114 15:53:57.292930  876396 machine.go:88] provisioning docker machine ...
	I1114 15:53:57.292950  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:57.293164  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293326  876396 buildroot.go:166] provisioning hostname "old-k8s-version-842105"
	I1114 15:53:57.293352  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.293472  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.295765  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296139  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.296170  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.296299  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.296470  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296625  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.296768  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.296945  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.297524  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.297546  876396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-842105 && echo "old-k8s-version-842105" | sudo tee /etc/hostname
	I1114 15:53:58.537304  876668 start.go:369] acquired machines lock for "default-k8s-diff-port-529430" in 4m8.43196122s
	I1114 15:53:58.537380  876668 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:53:58.537392  876668 fix.go:54] fixHost starting: 
	I1114 15:53:58.537828  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:53:58.537865  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:53:58.555361  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I1114 15:53:58.555809  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:53:58.556346  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:53:58.556379  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:53:58.556762  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:53:58.556993  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:53:58.557144  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:53:58.558707  876668 fix.go:102] recreateIfNeeded on default-k8s-diff-port-529430: state=Stopped err=<nil>
	I1114 15:53:58.558736  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	W1114 15:53:58.558888  876668 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:53:58.561175  876668 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-529430" ...
	I1114 15:53:57.423888  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842105
	
	I1114 15:53:57.423971  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.427115  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427421  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.427459  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.427660  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.427882  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428135  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.428351  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.428584  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.429089  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.429124  876396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-842105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-842105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-842105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:53:57.554847  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:53:57.554893  876396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:53:57.554957  876396 buildroot.go:174] setting up certificates
	I1114 15:53:57.554974  876396 provision.go:83] configureAuth start
	I1114 15:53:57.554989  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetMachineName
	I1114 15:53:57.555342  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:57.558305  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558681  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.558711  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.558876  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.561568  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.561937  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.561973  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.562106  876396 provision.go:138] copyHostCerts
	I1114 15:53:57.562196  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:53:57.562218  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:53:57.562284  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:53:57.562402  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:53:57.562413  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:53:57.562445  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:53:57.562520  876396 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:53:57.562532  876396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:53:57.562561  876396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:53:57.562631  876396 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-842105 san=[192.168.72.151 192.168.72.151 localhost 127.0.0.1 minikube old-k8s-version-842105]
	I1114 15:53:57.825621  876396 provision.go:172] copyRemoteCerts
	I1114 15:53:57.825706  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:53:57.825739  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.828352  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828732  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.828778  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.828924  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.829159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.829356  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.829505  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:57.913614  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:53:57.935960  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 15:53:57.957927  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:53:57.980061  876396 provision.go:86] duration metric: configureAuth took 425.071777ms
	I1114 15:53:57.980109  876396 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:53:57.980308  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:53:57.980405  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:57.983736  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984128  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:57.984161  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:57.984367  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:57.984574  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:57.984927  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:57.985116  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:57.985478  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:57.985505  876396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:53:58.297063  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:53:58.297107  876396 machine.go:91] provisioned docker machine in 1.004160647s
	I1114 15:53:58.297121  876396 start.go:300] post-start starting for "old-k8s-version-842105" (driver="kvm2")
	I1114 15:53:58.297135  876396 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:53:58.297159  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.297578  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:53:58.297624  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.300608  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301051  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.301081  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.301312  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.301485  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.301655  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.301774  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.387785  876396 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:53:58.391947  876396 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:53:58.391974  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:53:58.392056  876396 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:53:58.392177  876396 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:53:58.392301  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:53:58.401525  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:53:58.422853  876396 start.go:303] post-start completed in 125.713467ms
	I1114 15:53:58.422892  876396 fix.go:56] fixHost completed within 22.732917848s
	I1114 15:53:58.422922  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.425682  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426059  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.426098  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.426282  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.426487  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426663  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.426830  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.427040  876396 main.go:141] libmachine: Using SSH client type: native
	I1114 15:53:58.427400  876396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I1114 15:53:58.427416  876396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:53:58.537121  876396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977238.485050071
	
	I1114 15:53:58.537151  876396 fix.go:206] guest clock: 1699977238.485050071
	I1114 15:53:58.537161  876396 fix.go:219] Guest: 2023-11-14 15:53:58.485050071 +0000 UTC Remote: 2023-11-14 15:53:58.422897103 +0000 UTC m=+286.112017318 (delta=62.152968ms)
	I1114 15:53:58.537187  876396 fix.go:190] guest clock delta is within tolerance: 62.152968ms
	I1114 15:53:58.537206  876396 start.go:83] releasing machines lock for "old-k8s-version-842105", held for 22.847251095s
	I1114 15:53:58.537248  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.537548  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:58.540515  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.540932  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.540974  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.541110  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541612  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.541912  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:53:58.542012  876396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:53:58.542077  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.542171  876396 ssh_runner.go:195] Run: cat /version.json
	I1114 15:53:58.542202  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:53:58.544841  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545190  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545221  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545246  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545465  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.545666  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.545694  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:58.545714  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:58.545816  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.545905  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:53:58.546006  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.546067  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:53:58.546212  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:53:58.546365  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:53:58.626301  876396 ssh_runner.go:195] Run: systemctl --version
	I1114 15:53:58.651834  876396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:53:58.799770  876396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:53:58.806042  876396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:53:58.806134  876396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:53:58.824707  876396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:53:58.824752  876396 start.go:472] detecting cgroup driver to use...
	I1114 15:53:58.824824  876396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:53:58.840144  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:53:58.854846  876396 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:53:58.854905  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:53:58.869926  876396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:53:58.883517  876396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:53:58.990519  876396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:53:59.108637  876396 docker.go:219] disabling docker service ...
	I1114 15:53:59.108712  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:53:59.124681  876396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:53:59.138748  876396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:53:59.260422  876396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:53:59.364365  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:53:59.376773  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:53:59.394948  876396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1114 15:53:59.395027  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.404000  876396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:53:59.404074  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.412822  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.421316  876396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:53:59.429685  876396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:53:59.438818  876396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:53:59.446459  876396 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:53:59.446533  876396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:53:59.459160  876396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:53:59.467670  876396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:53:59.579125  876396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:53:59.794436  876396 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:53:59.794525  876396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:53:59.801013  876396 start.go:540] Will wait 60s for crictl version
	I1114 15:53:59.801095  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:53:59.805735  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:53:59.851270  876396 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:53:59.851383  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.898885  876396 ssh_runner.go:195] Run: crio --version
	I1114 15:53:59.953911  876396 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1114 15:53:58.562788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Start
	I1114 15:53:58.562971  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring networks are active...
	I1114 15:53:58.563570  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network default is active
	I1114 15:53:58.564001  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Ensuring network mk-default-k8s-diff-port-529430 is active
	I1114 15:53:58.564406  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Getting domain xml...
	I1114 15:53:58.565186  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Creating domain...
	I1114 15:53:59.907130  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting to get IP...
	I1114 15:53:59.908507  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.908991  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:53:59.909128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:53:59.908977  877437 retry.go:31] will retry after 306.122553ms: waiting for machine to come up
	I1114 15:53:56.176595  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.676568  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:53:56.699015  876220 api_server.go:72] duration metric: took 2.561885213s to wait for apiserver process to appear ...
	I1114 15:53:56.699041  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:53:56.699058  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:53:59.955466  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetIP
	I1114 15:53:59.959121  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959545  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:53:59.959572  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:53:59.959896  876396 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1114 15:53:59.965859  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:53:59.982494  876396 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 15:53:59.982563  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:00.029401  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:00.029483  876396 ssh_runner.go:195] Run: which lz4
	I1114 15:54:00.034065  876396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1114 15:54:00.039738  876396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:00.039780  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1114 15:54:01.846049  876396 crio.go:444] Took 1.812024 seconds to copy over tarball
	I1114 15:54:01.846160  876396 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:01.387625  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.387668  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.387690  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.430505  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:01.430539  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:01.930801  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:01.937138  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:01.937169  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.431712  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.442719  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:02.442758  876220 api_server.go:103] status: https://192.168.39.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:02.931021  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:54:02.938062  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:54:02.947420  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:02.947453  876220 api_server.go:131] duration metric: took 6.248404315s to wait for apiserver health ...
	I1114 15:54:02.947465  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:54:02.947479  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:02.949231  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:00.216693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217419  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.217476  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.217346  877437 retry.go:31] will retry after 276.469735ms: waiting for machine to come up
	I1114 15:54:00.496200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496596  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.496632  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.496550  877437 retry.go:31] will retry after 390.20616ms: waiting for machine to come up
	I1114 15:54:00.888367  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889303  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:00.889341  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:00.889235  877437 retry.go:31] will retry after 551.896336ms: waiting for machine to come up
	I1114 15:54:01.443159  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443794  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:01.443825  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:01.443756  877437 retry.go:31] will retry after 655.228992ms: waiting for machine to come up
	I1114 15:54:02.100194  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100681  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.100716  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.100609  877437 retry.go:31] will retry after 896.817469ms: waiting for machine to come up
	I1114 15:54:02.999296  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999947  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:02.999979  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:02.999897  877437 retry.go:31] will retry after 1.177419274s: waiting for machine to come up
	I1114 15:54:04.178783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179425  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:04.179452  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:04.179351  877437 retry.go:31] will retry after 1.259348434s: waiting for machine to come up
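While libmachine polls for the VM's address like this, the same lease information can be read straight from libvirt; a manual equivalent, as a sketch using the qemu:///system URI and network name from this run:
	# show the DHCP lease for the domain's MAC on the minikube-created network
	virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-529430 \
	  | grep 52:54:00:ee:13:ce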
	I1114 15:54:02.950643  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:02.986775  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
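The 457-byte conflist written here is minikube's default bridge CNI configuration; a hand-written equivalent looks roughly like the sketch below (field values are typical defaults assuming the 10.244.0.0/16 pod CIDR used in this run, so the exact file may differ):
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF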
	I1114 15:54:03.054339  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:03.074346  876220 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:03.074405  876220 system_pods.go:61] "coredns-5dd5756b68-gqxld" [0b846e58-0bbc-4770-94a4-8324753b36c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:03.074428  876220 system_pods.go:61] "etcd-embed-certs-279880" [e085e7a7-ec2e-4cf6-bbb2-d242a5e8d075] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:03.074442  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [4ffbfbaf-9978-4bb1-9e4e-ef23365f78fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:03.074455  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [d895906c-899f-41b3-9484-1a6985b978f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:03.074471  876220 system_pods.go:61] "kube-proxy-j2qnm" [feee8604-a749-4908-8361-42f63d55ec64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:03.074485  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [4325a0ba-9013-4899-b01b-befcb4cd5b72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:03.074504  876220 system_pods.go:61] "metrics-server-57f55c9bc5-gvtbw" [a7c44219-4b00-49c0-817f-68f9499f1ab8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:03.074531  876220 system_pods.go:61] "storage-provisioner" [f464123e-8329-4785-87ae-78ff30ac7d27] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:03.074547  876220 system_pods.go:74] duration metric: took 20.179327ms to wait for pod list to return data ...
	I1114 15:54:03.074558  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:03.078482  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:03.078526  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:03.078542  876220 node_conditions.go:105] duration metric: took 3.972732ms to run NodePressure ...
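The same capacity figures come straight off the node object; a manual equivalent of this check (a sketch, assuming the kubeconfig context is named after the embed-certs-279880 profile):
	kubectl --context embed-certs-279880 get nodes -o \
	  jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'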
	I1114 15:54:03.078565  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:03.514232  876220 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521097  876220 kubeadm.go:787] kubelet initialised
	I1114 15:54:03.521125  876220 kubeadm.go:788] duration metric: took 6.859971ms waiting for restarted kubelet to initialise ...
	I1114 15:54:03.521168  876220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:03.528777  876220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
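This pod_ready wait is equivalent to a kubectl wait on the Ready condition; a manual version for the CoreDNS pod above (a sketch, same context assumption as before):
	kubectl --context embed-certs-279880 -n kube-system wait \
	  --for=condition=Ready pod/coredns-5dd5756b68-gqxld --timeout=4m0s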
	I1114 15:54:05.249338  876396 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403140591s)
	I1114 15:54:05.249383  876396 crio.go:451] Took 3.403300 seconds to extract the tarball
	I1114 15:54:05.249397  876396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 15:54:05.298779  876396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:05.351838  876396 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1114 15:54:05.351873  876396 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:05.352034  876396 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.352124  876396 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.352201  876396 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.352219  876396 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.352035  876396 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.352067  876396 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.352087  876396 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354089  876396 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1114 15:54:05.354101  876396 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.354115  876396 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.354117  876396 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.354097  876396 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.354178  876396 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.354197  876396 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.354270  876396 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.512829  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.521658  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.529228  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1114 15:54:05.529451  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.529597  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.529802  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.534672  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.613591  876396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1114 15:54:05.613650  876396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.613721  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.644613  876396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:05.668090  876396 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1114 15:54:05.668167  876396 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.668231  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.685343  876396 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1114 15:54:05.685398  876396 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1114 15:54:05.685458  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725459  876396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1114 15:54:05.725508  876396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.725523  876396 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1114 15:54:05.725561  876396 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.725565  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.725602  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727180  876396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1114 15:54:05.727215  876396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.727249  876396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1114 15:54:05.727283  876396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.727254  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.727322  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1114 15:54:05.727325  876396 ssh_runner.go:195] Run: which crictl
	I1114 15:54:05.849608  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1114 15:54:05.849657  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1114 15:54:05.849694  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1114 15:54:05.849747  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1114 15:54:05.849753  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1114 15:54:05.849830  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1114 15:54:05.849847  876396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1114 15:54:05.990379  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1114 15:54:05.990536  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1114 15:54:06.006943  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1114 15:54:06.006966  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1114 15:54:06.007017  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1114 15:54:06.007076  876396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.007134  876396 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1114 15:54:06.013121  876396 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1114 15:54:06.013141  876396 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1114 15:54:06.013192  876396 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
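For a single image the cache_images flow above boils down to: ask the runtime whether it already has the image, and only load the cached tarball if it does not; a manual equivalent (a sketch):
	# load the cached pause image only if the runtime doesn't already have it
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1 >/dev/null 2>&1 \
	  || sudo podman load -i /var/lib/minikube/images/pause_3.1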
	I1114 15:54:05.440685  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441307  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:05.441342  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:05.441243  877437 retry.go:31] will retry after 1.84307404s: waiting for machine to come up
	I1114 15:54:07.286027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286581  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:07.286612  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:07.286501  877437 retry.go:31] will retry after 2.149522769s: waiting for machine to come up
	I1114 15:54:09.437500  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.437998  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:09.438027  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:09.437930  877437 retry.go:31] will retry after 1.825733531s: waiting for machine to come up
	I1114 15:54:06.558998  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.056443  876220 pod_ready.go:102] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:09.550292  876220 pod_ready.go:92] pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:09.550325  876220 pod_ready.go:81] duration metric: took 6.02152032s waiting for pod "coredns-5dd5756b68-gqxld" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:09.550338  876220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:07.587512  876396 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.574275406s)
	I1114 15:54:07.587549  876396 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1114 15:54:07.587609  876396 cache_images.go:92] LoadImages completed in 2.235719587s
	W1114 15:54:07.587745  876396 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1114 15:54:07.587935  876396 ssh_runner.go:195] Run: crio config
	I1114 15:54:07.677561  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:07.677590  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:07.677624  876396 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:07.677649  876396 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-842105 NodeName:old-k8s-version-842105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 15:54:07.677852  876396 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-842105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-842105
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.151:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:07.677991  876396 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-842105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:54:07.678072  876396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1114 15:54:07.690041  876396 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:07.690195  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:07.699428  876396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1114 15:54:07.717871  876396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:07.736451  876396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
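With the kubelet unit, its kubeadm drop-in, and the kubeadm config on disk, the usual follow-up on a systemd host is a daemon reload and a kubelet restart (a sketch):
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet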
	I1114 15:54:07.760405  876396 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:07.766002  876396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:07.782987  876396 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105 for IP: 192.168.72.151
	I1114 15:54:07.783024  876396 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:07.783232  876396 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:07.783328  876396 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:07.783435  876396 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/client.key
	I1114 15:54:07.783530  876396 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key.8e16fdf2
	I1114 15:54:07.783587  876396 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key
	I1114 15:54:07.783733  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:07.783774  876396 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:07.783788  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:07.783825  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:07.783860  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:07.783903  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:07.783976  876396 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:07.784951  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:07.817959  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:07.849497  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:07.882885  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/old-k8s-version-842105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 15:54:07.917706  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:07.951168  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:07.980449  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:08.004910  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:08.038634  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:08.068999  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:08.099934  876396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:08.131714  876396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
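A quick sanity check that a copied certificate and its private key still belong together compares their moduli (a sketch, assuming the RSA keys minikube generates by default):
	diff \
	  <(openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt) \
	  <(openssl rsa -noout -modulus -in /var/lib/minikube/certs/apiserver.key) \
	  && echo "cert and key match"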
	I1114 15:54:08.150662  876396 ssh_runner.go:195] Run: openssl version
	I1114 15:54:08.158258  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:08.168218  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173533  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.173650  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:08.179886  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:08.189654  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:08.199563  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204439  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.204512  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:08.210587  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:54:08.220509  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:08.233859  876396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240418  876396 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.240484  876396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:08.248025  876396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
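The 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject hashes of the linked PEMs, which is how the system trust store finds them; the same link can be built by hand (a sketch):
	CERT=/usr/share/ca-certificates/8322112.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. 3ec20f2e
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"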
	I1114 15:54:08.261693  876396 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:08.267518  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:08.275553  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:08.283812  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:08.292063  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:08.299976  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:08.307726  876396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
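The -checkend 86400 flag asks OpenSSL whether the certificate will still be valid 86400 seconds (24 h) from now: exit 0 if so, 1 if it would expire, which is what decides whether regeneration is needed; spelled out as a sketch:
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h - regenerate"
	fi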
	I1114 15:54:08.315248  876396 kubeadm.go:404] StartCluster: {Name:old-k8s-version-842105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-842105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:08.315441  876396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:08.315509  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:08.373222  876396 cri.go:89] found id: ""
	I1114 15:54:08.373309  876396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:08.386081  876396 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:08.386113  876396 kubeadm.go:636] restartCluster start
	I1114 15:54:08.386175  876396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:08.398113  876396 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.399779  876396 kubeconfig.go:92] found "old-k8s-version-842105" server: "https://192.168.72.151:8443"
	I1114 15:54:08.403355  876396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:08.415044  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.415107  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.431221  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.431246  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.431301  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.441629  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:08.941906  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:08.942002  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:08.953895  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.442080  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.442167  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.454396  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:09.941960  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:09.942060  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:09.957741  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.442467  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.442585  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.459029  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:10.942110  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:10.942218  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:10.958207  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.441724  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.441846  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.456551  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.942092  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:11.942207  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:11.954734  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:11.265162  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265717  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:11.265754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:11.265645  877437 retry.go:31] will retry after 3.454522942s: waiting for machine to come up
	I1114 15:54:14.722448  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722869  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | unable to find current IP address of domain default-k8s-diff-port-529430 in network mk-default-k8s-diff-port-529430
	I1114 15:54:14.722900  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | I1114 15:54:14.722811  877437 retry.go:31] will retry after 4.385736497s: waiting for machine to come up
	I1114 15:54:11.568989  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:11.569021  876220 pod_ready.go:81] duration metric: took 2.018672405s waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:11.569032  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:13.599380  876220 pod_ready.go:102] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:15.095781  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.095806  876220 pod_ready.go:81] duration metric: took 3.52676767s waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.095816  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101837  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.101860  876220 pod_ready.go:81] duration metric: took 6.035008ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.101871  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107099  876220 pod_ready.go:92] pod "kube-proxy-j2qnm" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.107119  876220 pod_ready.go:81] duration metric: took 5.239707ms waiting for pod "kube-proxy-j2qnm" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.107131  876220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146726  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:15.146753  876220 pod_ready.go:81] duration metric: took 39.614218ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:15.146765  876220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:12.442685  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.442780  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.456555  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:12.941805  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:12.941902  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:12.955572  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.442111  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.442220  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.455769  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:13.941932  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:13.942051  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:13.957167  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.442727  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.442855  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.455220  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:14.941815  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:14.941911  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:14.955030  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.441942  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.442064  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.454228  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:15.942207  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:15.942299  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:15.955845  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.442537  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.442642  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.454339  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:16.941837  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:16.941933  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:16.955292  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
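Each of these entries is the same probe repeated on a short backoff until the apiserver process shows up; the pattern condenses to a loop like this (a sketch with an assumed 60 s deadline):
	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; break; }
	  sleep 0.5
	done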
	I1114 15:54:19.110067  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.110621  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Found IP for machine: 192.168.61.196
	I1114 15:54:19.110650  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserving static IP address...
	I1114 15:54:19.110682  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has current primary IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.111082  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.111142  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | skip adding static IP to network mk-default-k8s-diff-port-529430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-529430", mac: "52:54:00:ee:13:ce", ip: "192.168.61.196"}
	I1114 15:54:19.111163  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Reserved static IP address: 192.168.61.196
	I1114 15:54:19.111178  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Waiting for SSH to be available...
	I1114 15:54:19.111191  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Getting to WaitForSSH function...
	I1114 15:54:19.113739  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114145  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.114196  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.114327  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH client type: external
	I1114 15:54:19.114358  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa (-rw-------)
	I1114 15:54:19.114395  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:19.114417  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | About to run SSH command:
	I1114 15:54:19.114432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | exit 0
	I1114 15:54:19.213651  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:19.214087  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetConfigRaw
	I1114 15:54:19.214767  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.217678  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218072  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.218099  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.218414  876668 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/config.json ...
	I1114 15:54:19.218634  876668 machine.go:88] provisioning docker machine ...
	I1114 15:54:19.218662  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:19.218923  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219132  876668 buildroot.go:166] provisioning hostname "default-k8s-diff-port-529430"
	I1114 15:54:19.219155  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.219292  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.221719  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222106  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.222129  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.222272  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.222435  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222606  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.222748  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.222907  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.223312  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.223328  876668 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-529430 && echo "default-k8s-diff-port-529430" | sudo tee /etc/hostname
	I1114 15:54:19.373658  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-529430
	
	I1114 15:54:19.373691  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.376972  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377388  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.377432  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.377549  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.377754  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.377934  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.378123  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.378325  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:19.378667  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:19.378685  876668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-529430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-529430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-529430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:19.523410  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:19.523453  876668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:19.523498  876668 buildroot.go:174] setting up certificates
	I1114 15:54:19.523511  876668 provision.go:83] configureAuth start
	I1114 15:54:19.523530  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetMachineName
	I1114 15:54:19.523872  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:19.526757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527213  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.527242  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.527502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.530193  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530590  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.530630  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.530794  876668 provision.go:138] copyHostCerts
	I1114 15:54:19.530862  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:19.530886  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:19.530965  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:19.531069  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:19.531078  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:19.531104  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:19.531179  876668 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:19.531188  876668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:19.531218  876668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:19.531285  876668 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-529430 san=[192.168.61.196 192.168.61.196 localhost 127.0.0.1 minikube default-k8s-diff-port-529430]
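The server certificate generated here carries the VM's IP plus the local hostnames as subject alternative names, so both in-cluster and localhost connections validate against it. As a rough standalone illustration only (this is not minikube's actual Go code path, and the CSR/output file names are placeholders), an equivalent openssl invocation would be:

	# Illustrative only: sign a machine server cert with the existing CA and the same SANs
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-529430"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.61.196,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-diff-port-529430")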
	I1114 15:54:19.845785  876668 provision.go:172] copyRemoteCerts
	I1114 15:54:19.845852  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:19.845880  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:19.849070  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849461  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:19.849492  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:19.849693  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:19.849916  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:19.850139  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:19.850326  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:19.946041  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:19.976301  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1114 15:54:20.667697  876065 start.go:369] acquired machines lock for "no-preload-490998" in 59.048435079s
	I1114 15:54:20.667765  876065 start.go:96] Skipping create...Using existing machine configuration
	I1114 15:54:20.667776  876065 fix.go:54] fixHost starting: 
	I1114 15:54:20.668233  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:20.668278  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:20.689041  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I1114 15:54:20.689574  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:20.690138  876065 main.go:141] libmachine: Using API Version  1
	I1114 15:54:20.690168  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:20.690554  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:20.690760  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:20.690909  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 15:54:20.692627  876065 fix.go:102] recreateIfNeeded on no-preload-490998: state=Stopped err=<nil>
	I1114 15:54:20.692652  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	W1114 15:54:20.692849  876065 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 15:54:20.694674  876065 out.go:177] * Restarting existing kvm2 VM for "no-preload-490998" ...
	I1114 15:54:17.454958  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:19.455250  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:20.001972  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 15:54:20.026531  876668 provision.go:86] duration metric: configureAuth took 502.998106ms
	I1114 15:54:20.026585  876668 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:20.026832  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:20.026965  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.030385  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.030791  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.030974  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.031200  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031423  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.031647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.031861  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.032341  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.032367  876668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:20.394771  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:20.394805  876668 machine.go:91] provisioned docker machine in 1.176155811s
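The provisioning step that just completed wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ', marking the whole service CIDR as an insecure registry so in-cluster registries (for example the registry addon's ClusterIP service) can be pulled from over plain HTTP, and then restarted crio. Assuming the guest's crio unit sources that file (how the minikube ISO is normally wired, but an assumption here), this can be verified on the VM with:

	cat /etc/sysconfig/crio.minikube        # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -i minikube   # assumption: the unit references the sysconfig file
	systemctl is-active crio                # crio should be active again after the restart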
	I1114 15:54:20.394818  876668 start.go:300] post-start starting for "default-k8s-diff-port-529430" (driver="kvm2")
	I1114 15:54:20.394832  876668 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:20.394853  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.395240  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:20.395288  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.398478  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.398906  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.398945  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.399107  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.399344  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.399584  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.399752  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.491251  876668 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:20.495507  876668 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:20.495538  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:20.495627  876668 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:20.495718  876668 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:20.495814  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:20.504112  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:20.527100  876668 start.go:303] post-start completed in 132.264495ms
	I1114 15:54:20.527124  876668 fix.go:56] fixHost completed within 21.989733182s
	I1114 15:54:20.527150  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.530055  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530460  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.530502  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.530660  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.530868  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531069  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.531281  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.531458  876668 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:20.531874  876668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.61.196 22 <nil> <nil>}
	I1114 15:54:20.531889  876668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 15:54:20.667502  876668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977260.612374456
	
	I1114 15:54:20.667529  876668 fix.go:206] guest clock: 1699977260.612374456
	I1114 15:54:20.667536  876668 fix.go:219] Guest: 2023-11-14 15:54:20.612374456 +0000 UTC Remote: 2023-11-14 15:54:20.527127621 +0000 UTC m=+270.585277055 (delta=85.246835ms)
	I1114 15:54:20.667591  876668 fix.go:190] guest clock delta is within tolerance: 85.246835ms
	I1114 15:54:20.667604  876668 start.go:83] releasing machines lock for "default-k8s-diff-port-529430", held for 22.130251397s
	I1114 15:54:20.667642  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.668017  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:20.671690  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672166  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.672199  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.672583  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673190  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673412  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:20.673507  876668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:20.673573  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.673677  876668 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:20.673702  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:20.677394  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677505  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.677813  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.677847  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678133  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:20.678165  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:20.678228  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678331  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:20.678456  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678543  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:20.678783  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:20.678799  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.679008  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:20.770378  876668 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:20.799026  876668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:20.952410  876668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:20.960020  876668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:20.960164  876668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:20.976497  876668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:20.976537  876668 start.go:472] detecting cgroup driver to use...
	I1114 15:54:20.976623  876668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:20.995510  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:21.008750  876668 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:21.008824  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:21.021811  876668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:21.035329  876668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:21.148775  876668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:21.285242  876668 docker.go:219] disabling docker service ...
	I1114 15:54:21.285318  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:21.298782  876668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:21.316123  876668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:21.488090  876668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:21.618889  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:21.632974  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:21.655781  876668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:21.655882  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.669231  876668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:21.669316  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.678786  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.688193  876668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:21.698797  876668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:21.709360  876668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:21.718312  876668 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:21.718380  876668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:21.736502  876668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
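The status-255 sysctl failure a few lines up is expected on a freshly booted guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the code falls back to modprobe and then enables IPv4 forwarding. The same probe-and-fix sequence, runnable by hand on the VM:

	# If the bridge netfilter key is missing, load the module, then enable forwarding
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo sysctl net.bridge.bridge-nf-call-iptables   # should now print the key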
	I1114 15:54:21.746439  876668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:21.863214  876668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:22.102179  876668 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:22.102265  876668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:22.108046  876668 start.go:540] Will wait 60s for crictl version
	I1114 15:54:22.108121  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:54:22.113795  876668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:22.165127  876668 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:22.165229  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.225931  876668 ssh_runner.go:195] Run: crio --version
	I1114 15:54:22.294400  876668 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:17.442023  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.442115  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.454984  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:17.942288  876396 api_server.go:166] Checking apiserver status ...
	I1114 15:54:17.942367  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:17.954587  876396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:18.415437  876396 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:18.415476  876396 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:18.415510  876396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:18.415594  876396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:18.457148  876396 cri.go:89] found id: ""
	I1114 15:54:18.457220  876396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:18.473763  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:18.482554  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:18.482618  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491282  876396 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:18.491331  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:18.611750  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.639893  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.02808682s)
	I1114 15:54:19.639964  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.850775  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:19.939183  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:20.055296  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:20.055384  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.076978  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:20.591616  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.091982  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.591312  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:21.635294  876396 api_server.go:72] duration metric: took 1.579988958s to wait for apiserver process to appear ...
	I1114 15:54:21.635323  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:21.635345  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:20.696162  876065 main.go:141] libmachine: (no-preload-490998) Calling .Start
	I1114 15:54:20.696380  876065 main.go:141] libmachine: (no-preload-490998) Ensuring networks are active...
	I1114 15:54:20.697208  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network default is active
	I1114 15:54:20.697665  876065 main.go:141] libmachine: (no-preload-490998) Ensuring network mk-no-preload-490998 is active
	I1114 15:54:20.698105  876065 main.go:141] libmachine: (no-preload-490998) Getting domain xml...
	I1114 15:54:20.698815  876065 main.go:141] libmachine: (no-preload-490998) Creating domain...
	I1114 15:54:22.152078  876065 main.go:141] libmachine: (no-preload-490998) Waiting to get IP...
	I1114 15:54:22.153475  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.153983  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.154071  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.153960  877583 retry.go:31] will retry after 305.242943ms: waiting for machine to come up
	I1114 15:54:22.460636  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.461432  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.461609  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.461568  877583 retry.go:31] will retry after 354.226558ms: waiting for machine to come up
	I1114 15:54:22.817225  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:22.817884  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:22.817999  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:22.817955  877583 retry.go:31] will retry after 337.727596ms: waiting for machine to come up
	I1114 15:54:23.157897  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.158614  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.158724  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.158679  877583 retry.go:31] will retry after 375.356441ms: waiting for machine to come up
	I1114 15:54:23.536061  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:23.536607  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:23.536633  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:23.536565  877583 retry.go:31] will retry after 652.853452ms: waiting for machine to come up
	I1114 15:54:22.295757  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetIP
	I1114 15:54:22.299345  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.299749  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:22.299788  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:22.300017  876668 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:22.305363  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:22.318715  876668 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:22.318773  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:22.368522  876668 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:22.368595  876668 ssh_runner.go:195] Run: which lz4
	I1114 15:54:22.373798  876668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 15:54:22.379337  876668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 15:54:22.379368  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1114 15:54:24.194028  876668 crio.go:444] Took 1.820276 seconds to copy over tarball
	I1114 15:54:24.194111  876668 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 15:54:21.457059  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:23.458432  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:26.636325  876396 api_server.go:269] stopped: https://192.168.72.151:8443/healthz: Get "https://192.168.72.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1114 15:54:26.636396  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:24.191080  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:24.191648  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:24.191685  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:24.191565  877583 retry.go:31] will retry after 883.93292ms: waiting for machine to come up
	I1114 15:54:25.076820  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:25.077325  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:25.077370  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:25.077290  877583 retry.go:31] will retry after 1.071889504s: waiting for machine to come up
	I1114 15:54:26.151239  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:26.151777  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:26.151812  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:26.151734  877583 retry.go:31] will retry after 1.05055701s: waiting for machine to come up
	I1114 15:54:27.204714  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:27.205193  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:27.205216  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:27.205147  877583 retry.go:31] will retry after 1.366779273s: waiting for machine to come up
	I1114 15:54:28.573131  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:28.573578  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:28.573605  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:28.573548  877583 retry.go:31] will retry after 1.629033633s: waiting for machine to come up
	I1114 15:54:27.635092  876668 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.440943465s)
	I1114 15:54:27.635134  876668 crio.go:451] Took 3.441078 seconds to extract the tarball
	I1114 15:54:27.635148  876668 ssh_runner.go:146] rm: /preloaded.tar.lz4
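Because the earlier sudo crictl images check found no preloaded images, the roughly 437 MiB preload tarball was copied to /preloaded.tar.lz4 and unpacked over /var with tar -I lz4, placing the CRI-O image store and other /var/lib state directly on the guest; the tarball is then removed. A quick check one could run afterwards (illustrative):

	sudo crictl images | grep kube-apiserver        # expect registry.k8s.io/kube-apiserver:v1.28.3
	ls /preloaded.tar.lz4 2>/dev/null || echo "tarball removed after extraction"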
	I1114 15:54:27.685486  876668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:27.742411  876668 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 15:54:27.742499  876668 cache_images.go:84] Images are preloaded, skipping loading
	I1114 15:54:27.742596  876668 ssh_runner.go:195] Run: crio config
	I1114 15:54:27.815555  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:27.815579  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:27.815601  876668 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:54:27.815624  876668 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.196 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-529430 NodeName:default-k8s-diff-port-529430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:54:27.815789  876668 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.196
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-529430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 15:54:27.815921  876668 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-529430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
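The file rendered above actually bundles four API documents separated by ---: InitConfiguration and ClusterConfiguration for kubeadm itself, plus a KubeletConfiguration and KubeProxyConfiguration that kubeadm propagates to the kubelet and kube-proxy. On kubeadm releases that ship the config validate subcommand (1.26 and later, so it should apply to the v1.28.3 binaries used here), the rendered file can be sanity-checked offline:

	# Offline check of the rendered config before the init phases are attempted
	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml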
	I1114 15:54:27.815999  876668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:54:27.825716  876668 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:54:27.825799  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:54:27.838987  876668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1114 15:54:27.855187  876668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:54:27.872995  876668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1114 15:54:27.890455  876668 ssh_runner.go:195] Run: grep 192.168.61.196	control-plane.minikube.internal$ /etc/hosts
	I1114 15:54:27.895678  876668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:27.909953  876668 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430 for IP: 192.168.61.196
	I1114 15:54:27.909999  876668 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:27.910204  876668 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:54:27.910271  876668 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:54:27.910463  876668 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/client.key
	I1114 15:54:27.910558  876668 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key.0d67e2f2
	I1114 15:54:27.910616  876668 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key
	I1114 15:54:27.910753  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:54:27.910797  876668 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:54:27.910811  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:54:27.910872  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:54:27.910917  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:54:27.910950  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:54:27.911007  876668 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:27.911985  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:54:27.937341  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 15:54:27.963511  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:54:27.990011  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/default-k8s-diff-port-529430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:54:28.016668  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:54:28.048528  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:54:28.077392  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:54:28.107784  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:54:28.136600  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:54:28.163995  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:54:28.191715  876668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:54:28.223205  876668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:54:28.243672  876668 ssh_runner.go:195] Run: openssl version
	I1114 15:54:28.249895  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:54:28.260568  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266792  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.266887  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:54:28.273048  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:54:28.283458  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:54:28.294810  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300316  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.300384  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:54:28.306193  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:54:28.319260  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:54:28.332843  876668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339044  876668 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.339120  876668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:54:28.346094  876668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
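The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CApath layout: a trust anchor is looked up by the hash of its subject name, so each PEM in /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA above). Written out directly:

	# How the <hash>.0 names are derived (illustration)
	cert=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/$h.0"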
	I1114 15:54:28.359711  876668 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:54:28.365300  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:54:28.372965  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:54:28.380378  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:54:28.387801  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:54:28.395228  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:54:28.401252  876668 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 15:54:28.407435  876668 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-529430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.3 ClusterName:default-k8s-diff-port-529430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:54:28.407581  876668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:54:28.407663  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:28.462877  876668 cri.go:89] found id: ""
	I1114 15:54:28.462962  876668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:54:28.473800  876668 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:54:28.473828  876668 kubeadm.go:636] restartCluster start
	I1114 15:54:28.473885  876668 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:54:28.485255  876668 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.486649  876668 kubeconfig.go:92] found "default-k8s-diff-port-529430" server: "https://192.168.61.196:8444"
	I1114 15:54:28.489408  876668 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:54:28.499927  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.499990  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.512175  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.512193  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:28.512238  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:28.524128  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.025143  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.025234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.040757  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:29.525035  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:29.525153  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:29.538214  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:28.174172  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:28.174207  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:28.674934  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.145414  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.145459  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.174596  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.231115  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.231157  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:29.674653  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:29.813013  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:29.813052  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.174424  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.183371  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1114 15:54:30.183427  876396 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1114 15:54:30.675007  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:54:30.686069  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:54:30.697376  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:54:30.697472  876396 api_server.go:131] duration metric: took 9.062139934s to wait for apiserver health ...
	I1114 15:54:30.697503  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:54:30.697535  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:30.699476  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:25.957052  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:28.490572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:30.701025  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:30.729153  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:30.770856  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:30.785989  876396 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:30.786041  876396 system_pods.go:61] "coredns-5644d7b6d9-dxtd8" [4d22eb1f-551c-49a1-a519-7420c3774e46] Running
	I1114 15:54:30.786051  876396 system_pods.go:61] "etcd-old-k8s-version-842105" [d4d5d869-b609-4017-8cf1-071b11f69d18] Running
	I1114 15:54:30.786057  876396 system_pods.go:61] "kube-apiserver-old-k8s-version-842105" [43e84141-4938-4808-bba5-14080a0a7b9e] Running
	I1114 15:54:30.786063  876396 system_pods.go:61] "kube-controller-manager-old-k8s-version-842105" [8fca7797-f3a1-4223-a921-0819aca95ce7] Running
	I1114 15:54:30.786069  876396 system_pods.go:61] "kube-proxy-kw2ns" [c6b5fbe3-a473-4120-bc41-fb85f6d3841d] Running
	I1114 15:54:30.786074  876396 system_pods.go:61] "kube-scheduler-old-k8s-version-842105" [c9cad8bb-b7a9-44fd-92d3-d3360284c9f3] Running
	I1114 15:54:30.786082  876396 system_pods.go:61] "metrics-server-74d5856cc6-q9hc5" [1333b6de-5f3f-4937-8e73-d2b7f2c6d37e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:30.786091  876396 system_pods.go:61] "storage-provisioner" [2d95ef7e-626e-4840-9f5d-708cd8c66576] Running
	I1114 15:54:30.786107  876396 system_pods.go:74] duration metric: took 15.207693ms to wait for pod list to return data ...
	I1114 15:54:30.786125  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:30.799034  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:30.799089  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:30.799105  876396 node_conditions.go:105] duration metric: took 12.974469ms to run NodePressure ...
	I1114 15:54:30.799137  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:31.065040  876396 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:31.068697  876396 retry.go:31] will retry after 147.435912ms: kubelet not initialised
	I1114 15:54:31.225671  876396 retry.go:31] will retry after 334.031544ms: kubelet not initialised
	I1114 15:54:31.565487  876396 retry.go:31] will retry after 641.328262ms: kubelet not initialised
	I1114 15:54:32.215327  876396 retry.go:31] will retry after 1.211422414s: kubelet not initialised
	I1114 15:54:30.204276  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:30.204775  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:30.204811  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:30.204713  877583 retry.go:31] will retry after 1.909641151s: waiting for machine to come up
	I1114 15:54:32.115658  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:32.116175  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:32.116209  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:32.116116  877583 retry.go:31] will retry after 3.266336566s: waiting for machine to come up
	I1114 15:54:30.024900  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.025024  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.041104  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.524842  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:30.524920  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:30.540643  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.025166  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.040723  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:31.525252  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:31.525364  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:31.537978  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.024495  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.024626  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.037625  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:32.524934  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:32.525053  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:32.540579  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.025237  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.025366  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.037675  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:33.524206  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:33.524300  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:33.537100  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.025150  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.025272  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.039435  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:34.525030  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:34.525140  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:34.541014  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:30.957869  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.458285  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:35.458815  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:33.432677  876396 retry.go:31] will retry after 864.36813ms: kubelet not initialised
	I1114 15:54:34.302450  876396 retry.go:31] will retry after 2.833071739s: kubelet not initialised
	I1114 15:54:37.142128  876396 retry.go:31] will retry after 2.880672349s: kubelet not initialised
	I1114 15:54:35.386010  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:35.386483  876065 main.go:141] libmachine: (no-preload-490998) DBG | unable to find current IP address of domain no-preload-490998 in network mk-no-preload-490998
	I1114 15:54:35.386526  876065 main.go:141] libmachine: (no-preload-490998) DBG | I1114 15:54:35.386417  877583 retry.go:31] will retry after 3.791360608s: waiting for machine to come up
	I1114 15:54:35.024814  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.024924  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.038035  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:35.524433  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:35.524540  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:35.538065  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.024585  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.024690  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.036540  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:36.525201  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:36.525293  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:36.537751  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.024292  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.024388  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.037480  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:37.525115  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:37.525234  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:37.538365  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.025002  876668 api_server.go:166] Checking apiserver status ...
	I1114 15:54:38.025148  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:54:38.036994  876668 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:54:38.500770  876668 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:54:38.500813  876668 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:54:38.500860  876668 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:54:38.500951  876668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:54:38.538468  876668 cri.go:89] found id: ""
	I1114 15:54:38.538571  876668 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:54:38.554809  876668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:54:38.563961  876668 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:54:38.564025  876668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572905  876668 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:54:38.572930  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:38.694403  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.614869  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.815977  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:39.914051  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:37.956992  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.957705  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:39.179165  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.179746  876065 main.go:141] libmachine: (no-preload-490998) Found IP for machine: 192.168.50.251
	I1114 15:54:39.179773  876065 main.go:141] libmachine: (no-preload-490998) Reserving static IP address...
	I1114 15:54:39.179792  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has current primary IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.180259  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.180295  876065 main.go:141] libmachine: (no-preload-490998) Reserved static IP address: 192.168.50.251
	I1114 15:54:39.180328  876065 main.go:141] libmachine: (no-preload-490998) DBG | skip adding static IP to network mk-no-preload-490998 - found existing host DHCP lease matching {name: "no-preload-490998", mac: "52:54:00:78:48:fe", ip: "192.168.50.251"}
	I1114 15:54:39.180349  876065 main.go:141] libmachine: (no-preload-490998) DBG | Getting to WaitForSSH function...
	I1114 15:54:39.180368  876065 main.go:141] libmachine: (no-preload-490998) Waiting for SSH to be available...
	I1114 15:54:39.182637  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183005  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.183037  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.183157  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH client type: external
	I1114 15:54:39.183185  876065 main.go:141] libmachine: (no-preload-490998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa (-rw-------)
	I1114 15:54:39.183218  876065 main.go:141] libmachine: (no-preload-490998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 15:54:39.183239  876065 main.go:141] libmachine: (no-preload-490998) DBG | About to run SSH command:
	I1114 15:54:39.183251  876065 main.go:141] libmachine: (no-preload-490998) DBG | exit 0
	I1114 15:54:39.276793  876065 main.go:141] libmachine: (no-preload-490998) DBG | SSH cmd err, output: <nil>: 
	I1114 15:54:39.277095  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetConfigRaw
	I1114 15:54:39.277799  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.281002  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281360  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.281393  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.281696  876065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/config.json ...
	I1114 15:54:39.281970  876065 machine.go:88] provisioning docker machine ...
	I1114 15:54:39.281997  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:39.282236  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282395  876065 buildroot.go:166] provisioning hostname "no-preload-490998"
	I1114 15:54:39.282416  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.282573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.285099  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285498  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.285527  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.285695  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.285865  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286026  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.286277  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.286523  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.286978  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.287007  876065 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-490998 && echo "no-preload-490998" | sudo tee /etc/hostname
	I1114 15:54:39.419452  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-490998
	
	I1114 15:54:39.419493  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.422544  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.422912  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.422951  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.423134  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.423360  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423591  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.423756  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.423915  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.424324  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.424363  876065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-490998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-490998/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-490998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 15:54:39.552044  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 15:54:39.552085  876065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17598-824991/.minikube CaCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17598-824991/.minikube}
	I1114 15:54:39.552106  876065 buildroot.go:174] setting up certificates
	I1114 15:54:39.552118  876065 provision.go:83] configureAuth start
	I1114 15:54:39.552127  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetMachineName
	I1114 15:54:39.552438  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:39.555275  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555660  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.555771  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.555936  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.558628  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559004  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.559042  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.559181  876065 provision.go:138] copyHostCerts
	I1114 15:54:39.559247  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem, removing ...
	I1114 15:54:39.559273  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem
	I1114 15:54:39.559337  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/ca.pem (1082 bytes)
	I1114 15:54:39.559498  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem, removing ...
	I1114 15:54:39.559512  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem
	I1114 15:54:39.559547  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/cert.pem (1123 bytes)
	I1114 15:54:39.559612  876065 exec_runner.go:144] found /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem, removing ...
	I1114 15:54:39.559620  876065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem
	I1114 15:54:39.559644  876065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17598-824991/.minikube/key.pem (1675 bytes)
	I1114 15:54:39.559697  876065 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem org=jenkins.no-preload-490998 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube no-preload-490998]
	I1114 15:54:39.728218  876065 provision.go:172] copyRemoteCerts
	I1114 15:54:39.728286  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 15:54:39.728314  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.731482  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.731920  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.731966  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.732138  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.732376  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.732605  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.732802  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:39.819537  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 15:54:39.848716  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 15:54:39.876339  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 15:54:39.917428  876065 provision.go:86] duration metric: configureAuth took 365.293803ms
	I1114 15:54:39.917461  876065 buildroot.go:189] setting minikube options for container-runtime
	I1114 15:54:39.917686  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:39.917783  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:39.920823  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921417  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:39.921457  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:39.921785  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:39.921989  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922170  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:39.922351  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:39.922516  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:39.922992  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:39.923017  876065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 15:54:40.270821  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 15:54:40.270851  876065 machine.go:91] provisioned docker machine in 988.864728ms
	I1114 15:54:40.270865  876065 start.go:300] post-start starting for "no-preload-490998" (driver="kvm2")
	I1114 15:54:40.270878  876065 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 15:54:40.270910  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.271296  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 15:54:40.271331  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.274197  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274517  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.274547  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.274784  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.275045  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.275209  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.275379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.363810  876065 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 15:54:40.368485  876065 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 15:54:40.368515  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/addons for local assets ...
	I1114 15:54:40.368599  876065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17598-824991/.minikube/files for local assets ...
	I1114 15:54:40.368688  876065 filesync.go:149] local asset: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem -> 8322112.pem in /etc/ssl/certs
	I1114 15:54:40.368820  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 15:54:40.378691  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:54:40.401789  876065 start.go:303] post-start completed in 130.90895ms
	I1114 15:54:40.401816  876065 fix.go:56] fixHost completed within 19.734039545s
	I1114 15:54:40.401848  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.404413  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404791  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.404824  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.404962  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.405212  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405442  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.405614  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.405840  876065 main.go:141] libmachine: Using SSH client type: native
	I1114 15:54:40.406318  876065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1114 15:54:40.406338  876065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1114 15:54:40.521875  876065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699977280.490539427
	
	I1114 15:54:40.521907  876065 fix.go:206] guest clock: 1699977280.490539427
	I1114 15:54:40.521917  876065 fix.go:219] Guest: 2023-11-14 15:54:40.490539427 +0000 UTC Remote: 2023-11-14 15:54:40.401821935 +0000 UTC m=+361.372113130 (delta=88.717492ms)
	I1114 15:54:40.521945  876065 fix.go:190] guest clock delta is within tolerance: 88.717492ms
	I1114 15:54:40.521952  876065 start.go:83] releasing machines lock for "no-preload-490998", held for 19.854220019s
	I1114 15:54:40.521990  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.522294  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:40.525204  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525567  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.525611  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.525786  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526412  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526589  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 15:54:40.526682  876065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 15:54:40.526727  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.526847  876065 ssh_runner.go:195] Run: cat /version.json
	I1114 15:54:40.526881  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 15:54:40.529470  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529673  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.529863  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.529895  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530047  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530189  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:40.530224  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530226  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:40.530415  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 15:54:40.530480  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530594  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 15:54:40.530677  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.530726  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 15:54:40.530881  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 15:54:40.634647  876065 ssh_runner.go:195] Run: systemctl --version
	I1114 15:54:40.641680  876065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 15:54:40.784919  876065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 15:54:40.791364  876065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 15:54:40.791466  876065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 15:54:40.814464  876065 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 15:54:40.814496  876065 start.go:472] detecting cgroup driver to use...
	I1114 15:54:40.814608  876065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 15:54:40.834599  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 15:54:40.851666  876065 docker.go:203] disabling cri-docker service (if available) ...
	I1114 15:54:40.851761  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 15:54:40.870359  876065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 15:54:40.885345  876065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 15:54:41.042220  876065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 15:54:41.174015  876065 docker.go:219] disabling docker service ...
	I1114 15:54:41.174101  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 15:54:41.188849  876065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 15:54:41.201322  876065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 15:54:41.329124  876065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 15:54:41.456116  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 15:54:41.477162  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 15:54:41.497860  876065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 15:54:41.497932  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.509750  876065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 15:54:41.509843  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.521944  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.532916  876065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 15:54:41.545469  876065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 15:54:41.556976  876065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 15:54:41.567322  876065 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1114 15:54:41.567401  876065 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1114 15:54:41.583043  876065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 15:54:41.593941  876065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 15:54:41.717384  876065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 15:54:41.907278  876065 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 15:54:41.907351  876065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 15:54:41.912763  876065 start.go:540] Will wait 60s for crictl version
	I1114 15:54:41.912843  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:41.917105  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 15:54:41.965326  876065 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1114 15:54:41.965418  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.016065  876065 ssh_runner.go:195] Run: crio --version
	I1114 15:54:42.079721  876065 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1114 15:54:40.028538  876396 retry.go:31] will retry after 2.943912692s: kubelet not initialised
	I1114 15:54:42.081301  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetIP
	I1114 15:54:42.084358  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.084771  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 15:54:42.084805  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 15:54:42.085014  876065 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1114 15:54:42.089551  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 15:54:42.102676  876065 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 15:54:42.102730  876065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 15:54:42.145434  876065 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1114 15:54:42.145479  876065 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 15:54:42.145570  876065 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.145592  876065 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.145621  876065 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.145620  876065 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.145662  876065 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1114 15:54:42.145692  876065 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.145819  876065 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.145564  876065 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.147966  876065 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.147967  876065 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.148031  876065 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.148056  876065 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1114 15:54:42.147970  876065 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.148093  876065 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.147960  876065 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.311979  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.318368  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1114 15:54:42.318578  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.325647  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.340363  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.375378  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.473131  876065 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1114 15:54:42.473195  876065 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.473202  876065 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1114 15:54:42.473235  876065 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.473253  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.473283  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.511600  876065 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.554432  876065 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1114 15:54:42.554502  876065 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1114 15:54:42.554572  876065 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.554599  876065 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1114 15:54:42.554618  876065 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.554632  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554657  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554532  876065 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.554724  876065 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1114 15:54:42.554750  876065 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.554776  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554778  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.554907  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 15:54:42.554969  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1114 15:54:42.576922  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1114 15:54:42.577004  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1114 15:54:42.577114  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1114 15:54:42.577535  876065 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1114 15:54:42.577591  876065 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.577631  876065 ssh_runner.go:195] Run: which crictl
	I1114 15:54:42.655186  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1114 15:54:42.655318  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1114 15:54:42.655449  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1114 15:54:42.655473  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:42.655536  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.706186  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1114 15:54:42.706257  876065 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:42.706283  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1114 15:54:42.706304  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:42.706372  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:42.706408  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1114 15:54:42.706548  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:42.737003  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1114 15:54:42.737032  876065 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737093  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1114 15:54:42.737102  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1114 15:54:42.737179  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1114 15:54:42.737237  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:42.769211  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1114 15:54:42.769251  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1114 15:54:42.769304  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1114 15:54:42.769289  876065 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 15:54:42.769428  876065 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:54:44.006164  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0: (1.268897316s)
	I1114 15:54:44.006206  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1114 15:54:44.006240  876065 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.236783751s)
	I1114 15:54:44.006275  876065 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1114 15:54:44.006283  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.269163879s)
	I1114 15:54:44.006297  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1114 15:54:44.006322  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:44.006375  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1114 15:54:40.016931  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:54:40.017030  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.030798  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:40.541996  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.042023  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:41.542537  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.042880  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.542514  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:54:42.577021  876668 api_server.go:72] duration metric: took 2.560093027s to wait for apiserver process to appear ...
	I1114 15:54:42.577059  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:54:42.577088  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.577767  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:42.577805  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.578225  876668 api_server.go:269] stopped: https://192.168.61.196:8444/healthz: Get "https://192.168.61.196:8444/healthz": dial tcp 192.168.61.196:8444: connect: connection refused
	I1114 15:54:43.078953  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:42.457425  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:44.460290  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:42.978588  876396 retry.go:31] will retry after 5.776997827s: kubelet not initialised
	I1114 15:54:46.326192  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.326231  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.326249  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.390609  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:54:46.390668  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:54:46.579140  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:46.590569  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:46.590606  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.079186  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.084460  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.084483  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:47.578774  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:47.588878  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:54:47.588919  876668 api_server.go:103] status: https://192.168.61.196:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:54:48.079047  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:54:48.084809  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:54:48.098877  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:54:48.098941  876668 api_server.go:131] duration metric: took 5.521873886s to wait for apiserver health ...
	I1114 15:54:48.098955  876668 cni.go:84] Creating CNI manager for ""
	I1114 15:54:48.098972  876668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:54:48.101010  876668 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:54:47.219243  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.212835904s)
	I1114 15:54:47.219281  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1114 15:54:47.219308  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:47.219472  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1114 15:54:48.102440  876668 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:54:48.154163  876668 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:54:48.212336  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:54:48.229819  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:54:48.229862  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:54:48.229874  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:54:48.229886  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:54:48.229896  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:54:48.229905  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:54:48.229913  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:54:48.229923  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:54:48.229934  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:54:48.229944  876668 system_pods.go:74] duration metric: took 17.577706ms to wait for pod list to return data ...
	I1114 15:54:48.229961  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:54:48.236002  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:54:48.236043  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:54:48.236057  876668 node_conditions.go:105] duration metric: took 6.089691ms to run NodePressure ...
	I1114 15:54:48.236093  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:54:48.608191  876668 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622192  876668 kubeadm.go:787] kubelet initialised
	I1114 15:54:48.622221  876668 kubeadm.go:788] duration metric: took 13.999979ms waiting for restarted kubelet to initialise ...
	I1114 15:54:48.622232  876668 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:48.629670  876668 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.636566  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636594  876668 pod_ready.go:81] duration metric: took 6.892422ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.636611  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.636619  876668 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.643982  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644013  876668 pod_ready.go:81] duration metric: took 7.383826ms waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.644030  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.644037  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.649791  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649815  876668 pod_ready.go:81] duration metric: took 5.769971ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.649825  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.649833  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:48.655071  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655100  876668 pod_ready.go:81] duration metric: took 5.259243ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:48.655113  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:48.655121  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.018817  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018849  876668 pod_ready.go:81] duration metric: took 363.719341ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.018863  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-proxy-zpchs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.018872  876668 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.417556  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417588  876668 pod_ready.go:81] duration metric: took 398.704259ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.417600  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.417607  876668 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:49.816654  876668 pod_ready.go:97] node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816692  876668 pod_ready.go:81] duration metric: took 399.075859ms waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:54:49.816712  876668 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-529430" hosting pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:49.816721  876668 pod_ready.go:38] duration metric: took 1.194471296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:49.816765  876668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:54:49.830335  876668 ops.go:34] apiserver oom_adj: -16
	I1114 15:54:49.830363  876668 kubeadm.go:640] restartCluster took 21.356528166s
	I1114 15:54:49.830372  876668 kubeadm.go:406] StartCluster complete in 21.422955285s
	I1114 15:54:49.830390  876668 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.830502  876668 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:54:49.832470  876668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:54:49.859435  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:54:49.859707  876668 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:54:49.859810  876668 config.go:182] Loaded profile config "default-k8s-diff-port-529430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:54:49.859852  876668 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859873  876668 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859885  876668 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-529430"
	I1114 15:54:49.859892  876668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-529430"
	W1114 15:54:49.859895  876668 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:54:49.859954  876668 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-529430"
	I1114 15:54:49.859973  876668 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.859981  876668 addons.go:240] addon metrics-server should already be in state true
	I1114 15:54:49.860025  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.859956  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.860306  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860345  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860438  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860452  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.860489  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.860491  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.866006  876668 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-529430" context rescaled to 1 replicas
	I1114 15:54:49.866053  876668 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.196 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:54:49.878650  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I1114 15:54:49.878976  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I1114 15:54:49.879627  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I1114 15:54:49.891649  876668 out.go:177] * Verifying Kubernetes components...
	I1114 15:54:49.893450  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:54:49.892232  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892275  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.892329  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.894259  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894282  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894473  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894486  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894610  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.894623  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.894687  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894892  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.894952  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.894993  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.895598  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.895642  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.896296  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.896321  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.899095  876668 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-529430"
	W1114 15:54:49.899120  876668 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:54:49.899151  876668 host.go:66] Checking if "default-k8s-diff-port-529430" exists ...
	I1114 15:54:49.899576  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.899622  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.917834  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1114 15:54:49.917842  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I1114 15:54:49.918442  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.918505  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.919007  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919026  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919167  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.919187  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.919493  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919562  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.919803  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.920191  876668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:54:49.920237  876668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:54:49.922764  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I1114 15:54:49.922969  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.924925  876668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:54:49.923380  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.926603  876668 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:49.926625  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:54:49.926647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.927991  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.928012  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.928459  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.928683  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.930696  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.930740  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.931131  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.931154  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.931330  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.931491  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.931647  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.931775  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.934128  876668 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:54:49.936007  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:54:49.936031  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:54:49.936056  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.939725  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.939782  876668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I1114 15:54:49.940336  876668 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:54:49.940442  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.940467  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.940822  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.941060  876668 main.go:141] libmachine: Using API Version  1
	I1114 15:54:49.941093  876668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:54:49.941095  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.941211  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.941388  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:49.941856  876668 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:54:49.942057  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetState
	I1114 15:54:49.943639  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .DriverName
	I1114 15:54:49.943972  876668 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:49.943991  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:54:49.944009  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHHostname
	I1114 15:54:49.947172  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947631  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:13:ce", ip: ""} in network mk-default-k8s-diff-port-529430: {Iface:virbr4 ExpiryTime:2023-11-14 16:54:12 +0000 UTC Type:0 Mac:52:54:00:ee:13:ce Iaid: IPaddr:192.168.61.196 Prefix:24 Hostname:default-k8s-diff-port-529430 Clientid:01:52:54:00:ee:13:ce}
	I1114 15:54:49.947663  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | domain default-k8s-diff-port-529430 has defined IP address 192.168.61.196 and MAC address 52:54:00:ee:13:ce in network mk-default-k8s-diff-port-529430
	I1114 15:54:49.947902  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHPort
	I1114 15:54:49.948102  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHKeyPath
	I1114 15:54:49.948278  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .GetSSHUsername
	I1114 15:54:49.948579  876668 sshutil.go:53] new ssh client: &{IP:192.168.61.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/default-k8s-diff-port-529430/id_rsa Username:docker}
	I1114 15:54:46.955010  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:48.955172  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:50.066801  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:54:50.084526  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:54:50.084555  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:54:50.145315  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:54:50.145671  876668 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:50.146084  876668 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 15:54:50.151627  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:54:50.151646  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:54:50.216318  876668 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:50.216349  876668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:54:50.316434  876668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:54:51.787528  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642164298s)
	I1114 15:54:51.787644  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787672  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.787695  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.720847981s)
	I1114 15:54:51.787744  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.787761  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788039  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788064  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788075  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788086  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.788094  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.788109  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.788119  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.788128  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790245  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.790294  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790322  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.790327  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.790349  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.803844  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.803875  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.804205  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.804238  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.804239  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.925929  876668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.609443677s)
	I1114 15:54:51.926001  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926019  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926385  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926429  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:51.926456  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926468  876668 main.go:141] libmachine: Making call to close driver server
	I1114 15:54:51.926483  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) Calling .Close
	I1114 15:54:51.926795  876668 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:54:51.926814  876668 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:54:51.926826  876668 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-529430"
	I1114 15:54:51.926829  876668 main.go:141] libmachine: (default-k8s-diff-port-529430) DBG | Closing plugin on server side
	I1114 15:54:52.146969  876668 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1114 15:54:48.761692  876396 retry.go:31] will retry after 7.067385779s: kubelet not initialised
	I1114 15:54:50.000157  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.780649338s)
	I1114 15:54:50.000194  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1114 15:54:50.000227  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:50.000281  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1114 15:54:52.291215  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (2.290903759s)
	I1114 15:54:52.291244  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1114 15:54:52.291271  876065 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:52.291312  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1114 15:54:53.739008  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.447671823s)
	I1114 15:54:53.739041  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1114 15:54:53.739066  876065 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1114 15:54:53.739126  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
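Each "Loading image" step above pushes one cached tarball into the node's container storage with "sudo podman load -i <tarball>", working through the images staged under /var/lib/minikube/images. A rough sketch of that loop using os/exec; the glob path mirrors the log, and running it requires podman and root on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    func main() {
        // Tarballs staged on the node, as seen in the log (/var/lib/minikube/images/...).
        tars, err := filepath.Glob("/var/lib/minikube/images/*")
        if err != nil {
            panic(err)
        }
        for _, tar := range tars {
            // Same command the test run issues over SSH: sudo podman load -i <tarball>.
            out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
            if err != nil {
                panic(fmt.Errorf("loading %s: %v\n%s", tar, err, out))
            }
            fmt.Printf("loaded %s\n", tar)
        }
    }
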
	I1114 15:54:52.194351  876668 addons.go:502] enable addons completed in 2.33463136s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1114 15:54:52.220203  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:54.220773  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:50.957159  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:53.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.458026  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:55.834422  876396 retry.go:31] will retry after 18.847542128s: kubelet not initialised
	I1114 15:54:56.221753  876668 node_ready.go:58] node "default-k8s-diff-port-529430" has status "Ready":"False"
	I1114 15:54:56.720961  876668 node_ready.go:49] node "default-k8s-diff-port-529430" has status "Ready":"True"
	I1114 15:54:56.720989  876668 node_ready.go:38] duration metric: took 6.575288694s waiting for node "default-k8s-diff-port-529430" to be "Ready" ...
	I1114 15:54:56.721001  876668 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:54:56.730382  876668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736722  876668 pod_ready.go:92] pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace has status "Ready":"True"
	I1114 15:54:56.736761  876668 pod_ready.go:81] duration metric: took 6.345209ms waiting for pod "coredns-5dd5756b68-b8szg" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:56.736774  876668 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:54:58.773825  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:57.458580  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:54:59.956188  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:01.061681  876065 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.322513643s)
	I1114 15:55:01.061716  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1114 15:55:01.061753  876065 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.061812  876065 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1114 15:55:01.811277  876065 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17598-824991/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1114 15:55:01.811342  876065 cache_images.go:123] Successfully loaded all cached images
	I1114 15:55:01.811352  876065 cache_images.go:92] LoadImages completed in 19.665858366s
	I1114 15:55:01.811461  876065 ssh_runner.go:195] Run: crio config
	I1114 15:55:01.881576  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:01.881603  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:01.881622  876065 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 15:55:01.881646  876065 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-490998 NodeName:no-preload-490998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 15:55:01.881781  876065 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-490998"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
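The kubeadm config printed above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---") that is later written to /var/tmp/minikube/kubeadm.yaml. A small sketch that walks those documents and prints each apiVersion/kind pair; gopkg.in/yaml.v3 is an assumption here, not necessarily what minikube itself uses:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // header captures only the fields every kubeadm document carries.
    type header struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var h header
            if err := dec.Decode(&h); err != nil {
                if errors.Is(err, io.EOF) {
                    break // end of the multi-document stream
                }
                panic(err)
            }
            fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
        }
    }
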
	I1114 15:55:01.881859  876065 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-490998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 15:55:01.881918  876065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 15:55:01.892613  876065 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 15:55:01.892696  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 15:55:01.902267  876065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1114 15:55:01.919728  876065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 15:55:01.936188  876065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1114 15:55:01.954510  876065 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1114 15:55:01.958337  876065 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
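The one-liner above is how the control-plane host record is refreshed: keep every /etc/hosts line that does not already end in control-plane.minikube.internal, append a fresh record mapping the node IP to that name, and copy the temp file back with sudo. A rough Go equivalent of the same idea (IP and hostname taken from the log; writing /etc/hosts needs root):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const (
            hostsPath = "/etc/hosts"
            name      = "control-plane.minikube.internal"
            ip        = "192.168.50.251"
        )

        in, err := os.Open(hostsPath)
        if err != nil {
            panic(err)
        }
        defer in.Close()

        var out strings.Builder
        sc := bufio.NewScanner(in)
        for sc.Scan() {
            line := sc.Text()
            fields := strings.Fields(line)
            // Drop any existing record for the control-plane name (the grep -v part).
            if len(fields) > 0 && fields[len(fields)-1] == name {
                continue
            }
            out.WriteString(line + "\n")
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        // Append the fresh record (the echo part), then write the file back.
        fmt.Fprintf(&out, "%s\t%s\n", ip, name)
        if err := os.WriteFile(hostsPath, []byte(out.String()), 0644); err != nil {
            panic(err) // corresponds to the sudo cp in the log
        }
    }
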
	I1114 15:55:01.970290  876065 certs.go:56] Setting up /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998 for IP: 192.168.50.251
	I1114 15:55:01.970328  876065 certs.go:190] acquiring lock for shared ca certs: {Name:mkb9015cecd3cab037cb1158c96589066c7a282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:55:01.970513  876065 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key
	I1114 15:55:01.970563  876065 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key
	I1114 15:55:01.970662  876065 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/client.key
	I1114 15:55:01.970794  876065 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key.6b358a63
	I1114 15:55:01.970857  876065 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key
	I1114 15:55:01.971003  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem (1338 bytes)
	W1114 15:55:01.971065  876065 certs.go:433] ignoring /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211_empty.pem, impossibly tiny 0 bytes
	I1114 15:55:01.971079  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 15:55:01.971123  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/ca.pem (1082 bytes)
	I1114 15:55:01.971160  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/cert.pem (1123 bytes)
	I1114 15:55:01.971192  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/certs/home/jenkins/minikube-integration/17598-824991/.minikube/certs/key.pem (1675 bytes)
	I1114 15:55:01.971252  876065 certs.go:437] found cert: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem (1708 bytes)
	I1114 15:55:01.972129  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 15:55:01.996012  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 15:55:02.020778  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 15:55:02.044395  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 15:55:02.066866  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 15:55:02.089331  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 15:55:02.113148  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 15:55:02.136083  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 15:55:02.157833  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 15:55:02.181150  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/certs/832211.pem --> /usr/share/ca-certificates/832211.pem (1338 bytes)
	I1114 15:55:02.203155  876065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/ssl/certs/8322112.pem --> /usr/share/ca-certificates/8322112.pem (1708 bytes)
	I1114 15:55:02.225839  876065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 15:55:02.243335  876065 ssh_runner.go:195] Run: openssl version
	I1114 15:55:02.249465  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8322112.pem && ln -fs /usr/share/ca-certificates/8322112.pem /etc/ssl/certs/8322112.pem"
	I1114 15:55:02.259874  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264340  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 14:48 /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.264401  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8322112.pem
	I1114 15:55:02.270441  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8322112.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 15:55:02.282031  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 15:55:02.293297  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298093  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 14:39 /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.298155  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 15:55:02.303668  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 15:55:02.315423  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/832211.pem && ln -fs /usr/share/ca-certificates/832211.pem /etc/ssl/certs/832211.pem"
	I1114 15:55:02.325976  876065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332124  876065 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 14:48 /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.332194  876065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/832211.pem
	I1114 15:55:02.339377  876065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/832211.pem /etc/ssl/certs/51391683.0"
	I1114 15:55:02.350318  876065 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 15:55:02.354796  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 15:55:02.360867  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 15:55:02.366306  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 15:55:02.372186  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 15:55:02.377900  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 15:55:02.383519  876065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
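Each "openssl x509 -noout -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would mean it expires inside that window and has to be regenerated before kubeadm reuses it. The same check, sketched with crypto/x509 from the standard library (the path is one of the files listed above and is only illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
        deadline := time.Now().Add(24 * time.Hour)
        if cert.NotAfter.Before(deadline) {
            fmt.Println("certificate expires within 24h, regenerate it")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
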
	I1114 15:55:02.389128  876065 kubeadm.go:404] StartCluster: {Name:no-preload-490998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:no-preload-490998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 15:55:02.389229  876065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 15:55:02.389304  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:02.428473  876065 cri.go:89] found id: ""
	I1114 15:55:02.428578  876065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 15:55:02.439944  876065 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 15:55:02.439969  876065 kubeadm.go:636] restartCluster start
	I1114 15:55:02.440079  876065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 15:55:02.450025  876065 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.451533  876065 kubeconfig.go:92] found "no-preload-490998" server: "https://192.168.50.251:8443"
	I1114 15:55:02.454290  876065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 15:55:02.463352  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.463410  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.474007  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.474025  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.474065  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.484826  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:02.985519  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:02.985595  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:02.998224  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.485905  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.486059  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.499281  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:03.985805  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:03.985925  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:03.998086  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:00.819591  876668 pod_ready.go:102] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:02.773550  876668 pod_ready.go:92] pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.773573  876668 pod_ready.go:81] duration metric: took 6.036790568s waiting for pod "etcd-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.773582  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778746  876668 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.778764  876668 pod_ready.go:81] duration metric: took 5.176465ms waiting for pod "kube-apiserver-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.778772  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784332  876668 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.784353  876668 pod_ready.go:81] duration metric: took 5.572815ms waiting for pod "kube-controller-manager-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.784366  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789492  876668 pod_ready.go:92] pod "kube-proxy-zpchs" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.789514  876668 pod_ready.go:81] duration metric: took 5.139759ms waiting for pod "kube-proxy-zpchs" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.789524  876668 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796606  876668 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:02.796628  876668 pod_ready.go:81] duration metric: took 7.097079ms waiting for pod "kube-scheduler-default-k8s-diff-port-529430" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.796639  876668 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:02.454894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.956449  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:04.485284  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.485387  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.498240  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:04.985846  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:04.985936  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:04.998901  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.485250  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.485365  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.497261  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.985411  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:05.985511  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:05.997656  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.485227  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.485332  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.497310  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:06.985893  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:06.985977  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:06.997585  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.485903  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.486001  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.498532  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:07.985881  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:07.985958  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:07.997898  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.485400  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.485512  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.497446  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:08.985912  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:08.986015  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:08.998121  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:05.081742  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:07.082515  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.580987  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:06.957307  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.455227  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:09.485641  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.485735  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.498347  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:09.985970  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:09.986073  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:09.997958  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.485503  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.485600  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.497407  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:10.985577  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:10.985655  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:10.998624  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.485146  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.485250  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.497837  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:11.985423  876065 api_server.go:166] Checking apiserver status ...
	I1114 15:55:11.985551  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 15:55:11.997959  876065 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 15:55:12.464381  876065 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 15:55:12.464449  876065 kubeadm.go:1128] stopping kube-system containers ...
	I1114 15:55:12.464478  876065 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1114 15:55:12.464582  876065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 15:55:12.505435  876065 cri.go:89] found id: ""
	I1114 15:55:12.505532  876065 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 15:55:12.522470  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:55:12.532890  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:55:12.532982  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542115  876065 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 15:55:12.542141  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:12.684875  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:13.897464  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.21254145s)
	I1114 15:55:13.897509  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:11.582332  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.085102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:11.955438  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.455506  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:14.687822  876396 kubeadm.go:787] kubelet initialised
	I1114 15:55:14.687849  876396 kubeadm.go:788] duration metric: took 43.622781532s waiting for restarted kubelet to initialise ...
	I1114 15:55:14.687857  876396 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:14.693560  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698796  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.698819  876396 pod_ready.go:81] duration metric: took 5.232669ms waiting for pod "coredns-5644d7b6d9-dxtd8" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.698828  876396 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703879  876396 pod_ready.go:92] pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.703903  876396 pod_ready.go:81] duration metric: took 5.067006ms waiting for pod "coredns-5644d7b6d9-jpwgp" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.703916  876396 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708064  876396 pod_ready.go:92] pod "etcd-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.708093  876396 pod_ready.go:81] duration metric: took 4.168333ms waiting for pod "etcd-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.708106  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713030  876396 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:14.713055  876396 pod_ready.go:81] duration metric: took 4.939899ms waiting for pod "kube-apiserver-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:14.713067  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087824  876396 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.087857  876396 pod_ready.go:81] duration metric: took 374.780312ms waiting for pod "kube-controller-manager-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.087873  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.486984  876396 pod_ready.go:92] pod "kube-proxy-kw2ns" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.487011  876396 pod_ready.go:81] duration metric: took 399.130772ms waiting for pod "kube-proxy-kw2ns" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.487020  876396 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886624  876396 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:15.886658  876396 pod_ready.go:81] duration metric: took 399.628757ms waiting for pod "kube-scheduler-old-k8s-version-842105" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:15.886671  876396 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
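The pod_ready loop seen throughout this run fetches each system-critical pod and reports whether its PodReady condition is True, retrying until it is or the deadline (4m0s here, 6m0s for the other profiles) runs out; the long runs of "Ready":"False" lines for the metrics-server pods are exactly that retry loop. A rough client-go sketch of the check, reusing the node's kubeconfig path from earlier in the log and one of the pod names above purely as placeholders:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition on the pod is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll one pod until it is Ready or the deadline passes, mirroring pod_ready.go.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-74d5856cc6-q9hc5", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
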
	I1114 15:55:14.096314  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.174495  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:14.254647  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:55:14.254765  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.273596  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:14.788350  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.288506  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:15.788580  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.288476  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.787853  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:55:16.816380  876065 api_server.go:72] duration metric: took 2.561735945s to wait for apiserver process to appear ...
	I1114 15:55:16.816408  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:55:16.816428  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:16.582309  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:18.584599  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:16.957605  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:19.457613  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.541438  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.541473  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:20.541490  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:20.582790  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 15:55:20.582838  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 15:55:21.083891  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.089625  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.089658  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:21.583184  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:21.599539  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 15:55:21.599576  876065 api_server.go:103] status: https://192.168.50.251:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 15:55:22.083098  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 15:55:22.088480  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 15:55:22.096517  876065 api_server.go:141] control plane version: v1.28.3
	I1114 15:55:22.096545  876065 api_server.go:131] duration metric: took 5.280130119s to wait for apiserver health ...
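(The healthz sequence above, repeated 500s with "[-]poststarthook/rbac/bootstrap-roles failed" followed by a 200, is the usual pattern while a restarted apiserver finishes bootstrapping. Below is a minimal Go sketch of that kind of poll loop; it is not minikube's implementation (the real check is in the api_server.go referenced in the log), and the URL, timeout, and TLS handling are illustrative assumptions only.)

// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200
// or the deadline expires. Illustrative sketch, not minikube's code.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is self-signed in this setup, so verification is
		// skipped for the sketch; real code should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Address taken from the log above; adjust for your own cluster.
	if err := waitForHealthz(ctx, "https://192.168.50.251:8443/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}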
	I1114 15:55:22.096558  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:55:22.096568  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:55:22.098612  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:55:18.194723  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:20.195126  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.196472  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:22.100184  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:55:22.125049  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:55:22.150893  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:55:22.163922  876065 system_pods.go:59] 8 kube-system pods found
	I1114 15:55:22.163958  876065 system_pods.go:61] "coredns-5dd5756b68-n77fz" [e2f5ce73-a65e-40da-b554-c929f093a1a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:55:22.163970  876065 system_pods.go:61] "etcd-no-preload-490998" [01e272b5-4463-431d-8ed1-f561a90b667d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 15:55:22.163983  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [529f79fd-eae5-44e9-971d-b3ecb5ed025d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 15:55:22.163989  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ea299234-2456-4171-bac0-8e8ff4998596] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 15:55:22.163994  876065 system_pods.go:61] "kube-proxy-6hqk5" [7233dd72-138c-4148-834b-2dcb83a4cf00] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:55:22.163999  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [666e8a03-50b1-4b08-84f3-c3c6ec8a5452] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 15:55:22.164005  876065 system_pods.go:61] "metrics-server-57f55c9bc5-6lg6h" [7afa1e38-c64c-4d03-9b00-5765e7e251ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:55:22.164036  876065 system_pods.go:61] "storage-provisioner" [1090ed8a-6424-4980-9ea7-b43e998d1eb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:55:22.164050  876065 system_pods.go:74] duration metric: took 13.132475ms to wait for pod list to return data ...
	I1114 15:55:22.164058  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:55:22.167930  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:55:22.168020  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 15:55:22.168033  876065 node_conditions.go:105] duration metric: took 3.969303ms to run NodePressure ...
	I1114 15:55:22.168055  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 15:55:22.456975  876065 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470174  876065 kubeadm.go:787] kubelet initialised
	I1114 15:55:22.470202  876065 kubeadm.go:788] duration metric: took 13.201285ms waiting for restarted kubelet to initialise ...
	I1114 15:55:22.470216  876065 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:55:22.483150  876065 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
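(The pod_ready lines that follow are per-pod polls of the Ready condition with an overall 4m0s budget. A minimal client-go sketch of that pattern is below; the kubeconfig path is a placeholder, the namespace and pod name are copied from the log for illustration, and this is not the pod_ready.go implementation itself.)

// podready.go: wait for a pod's Ready condition under an overall deadline,
// roughly the loop the pod_ready.go lines above describe. Sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			return nil // status "Ready":"True"
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded" on timeout
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "coredns-5dd5756b68-n77fz"); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}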
	I1114 15:55:21.081628  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:23.083015  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:21.955808  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.455829  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.696004  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.195514  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:24.514847  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.519442  876065 pod_ready.go:102] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:27.013526  876065 pod_ready.go:92] pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:27.013584  876065 pod_ready.go:81] duration metric: took 4.530407487s waiting for pod "coredns-5dd5756b68-n77fz" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:27.013600  876065 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:29.032979  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:25.582366  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.080716  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:26.456123  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:28.955087  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:29.694646  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.194401  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:31.033810  876065 pod_ready.go:102] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:33.033026  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.033058  876065 pod_ready.go:81] duration metric: took 6.019448696s waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.033071  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039148  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.039180  876065 pod_ready.go:81] duration metric: took 6.099138ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.039194  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049651  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.049675  876065 pod_ready.go:81] duration metric: took 10.473938ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.049685  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061928  876065 pod_ready.go:92] pod "kube-proxy-6hqk5" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.061971  876065 pod_ready.go:81] duration metric: took 12.277038ms waiting for pod "kube-proxy-6hqk5" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.061984  876065 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071422  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 15:55:33.071452  876065 pod_ready.go:81] duration metric: took 9.456301ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:33.071465  876065 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	I1114 15:55:30.081625  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.082675  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.581547  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:30.955154  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:32.957772  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.454775  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:34.194959  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:36.195495  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:35.339391  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.340404  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.083295  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.584210  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:37.455343  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.956659  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:38.696669  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.194485  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:39.838699  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:41.840605  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.081223  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.081468  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:42.454630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.455871  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:43.195172  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:45.195687  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:44.339878  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.838910  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.841677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.082382  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.582248  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:46.457525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:48.955133  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:47.695467  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.195263  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.339284  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.340315  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:51.082546  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.581238  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:50.955630  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:53.454502  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.455395  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:52.694030  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:54.694593  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:56.695136  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.838685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.838864  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:55.581986  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.582037  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.582635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:57.955377  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.963166  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:01.195573  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:55:59.840578  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.338828  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.082323  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.582531  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:02.454214  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.454975  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:03.198457  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:05.694675  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:04.339632  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.340001  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.840358  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:07.082081  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:09.582483  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:06.455257  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.455373  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.457344  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:08.196641  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:10.693989  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.339845  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:13.839805  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:11.583615  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:14.083682  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.957092  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.456347  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:12.694792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:15.200049  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.339768  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:18.839853  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:16.583278  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:19.081994  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.954665  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.454724  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:17.697859  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.194201  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.194738  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:20.840457  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.339880  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:21.082759  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:23.581646  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:22.457299  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.954029  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:24.694448  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.696563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:25.342126  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:27.839304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.083724  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:28.582086  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:26.955572  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.459642  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:29.194785  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.693765  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:30.339130  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:32.339361  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.083363  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.582213  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:31.955312  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.955576  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:33.694783  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.195019  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:34.339538  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.839469  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.842444  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.081206  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.581263  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:36.457091  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.956262  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:38.195134  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:40.195875  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.343304  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.839634  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.080021  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.081543  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:41.453768  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:43.455182  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.457368  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:42.694667  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.195018  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.197081  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:46.338815  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:48.339683  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:45.083139  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.582320  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:47.954718  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.455135  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:49.696028  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.194484  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.340708  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.845026  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:50.082635  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.583485  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:52.455840  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.955079  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:54.194627  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.197158  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.338956  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.339983  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:55.081903  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:57.583102  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:56.955380  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.956134  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:58.695165  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.196563  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:56:59.340299  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.838688  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.839025  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:00.080983  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:02.582197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:04.583222  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:01.454473  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.455187  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.455628  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:03.694518  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.695324  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:05.839239  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.341567  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.081736  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.581889  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:07.954781  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:09.954913  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:08.194118  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.194688  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:12.195198  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:10.840317  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.582436  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:13.583580  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:11.955894  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.459525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:14.195588  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.195922  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:15.339470  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:17.340059  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.081770  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.082006  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:16.954957  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.455211  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:18.695530  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.193801  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:19.839618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.839819  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:20.083348  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:22.581010  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.582114  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:21.958579  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.454848  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:23.196520  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:25.196779  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:24.339942  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.340928  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.841122  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.583453  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:29.082667  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:26.455784  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:28.954086  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:27.695279  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.194416  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.341608  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.343898  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:31.581417  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.583852  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:30.955148  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:33.455525  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:32.693640  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:34.695191  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.194999  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.838294  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:37.838948  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:36.082181  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.582488  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:35.955108  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:38.454392  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:40.455291  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.195193  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.694849  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:39.839180  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.339359  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:41.081697  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:43.081876  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:42.455905  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.962584  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.194494  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:46.195239  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:44.840607  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.338846  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:45.582002  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.083197  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:47.454539  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.455025  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:48.694661  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.695232  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:49.840392  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.338628  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:50.580410  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:52.580961  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.581502  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:51.954903  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.454053  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:53.194450  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:55.196537  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:54.339997  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.839677  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.080798  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.087078  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:56.454639  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:58.955200  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:57.696210  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:00.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:02.194961  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:57:59.339152  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.340037  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.838551  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.582808  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.084331  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:01.458365  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:03.955679  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:04.696770  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:07.195364  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:05.840151  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.340709  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.582153  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.083260  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:06.454599  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:08.458281  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:09.196674  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.696022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.839588  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.342479  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:11.583479  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:14.081451  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:10.954623  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:13.455233  876220 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.147383  876220 pod_ready.go:81] duration metric: took 4m0.000589332s waiting for pod "metrics-server-57f55c9bc5-gvtbw" in "kube-system" namespace to be "Ready" ...
	E1114 15:58:15.147416  876220 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:58:15.147446  876220 pod_ready.go:38] duration metric: took 4m11.626263996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:15.147477  876220 kubeadm.go:640] restartCluster took 4m32.524775831s
	W1114 15:58:15.147587  876220 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:58:15.147630  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
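(Once the 4m0s wait expires, the run above gives up on restarting the existing cluster and falls back to a full "kubeadm reset" followed by "kubeadm init". The Go sketch below mirrors that fallback; minikube actually runs these commands over SSH inside the VM, and the flag list here is shortened from the log rather than complete.)

// resetfallback.go: reset a failed control plane and re-initialise it,
// mirroring the fallback in the log above. Sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// Wipe the failed control-plane state first.
	if err := run("kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force"); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	// Re-initialise from the generated config, tolerating leftover manifests
	// (preflight-error list abbreviated from the log).
	if err := run("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem",
	); err != nil {
		fmt.Println("init failed:", err)
	}
}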
	I1114 15:58:14.195824  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.696055  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:15.841115  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.341347  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:16.084839  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.582575  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:18.696792  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.194869  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:20.838749  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:22.840049  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:21.080598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.081173  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:23.694974  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:26.196317  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.340015  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.839312  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:25.081700  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:27.582564  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.582728  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:29.037182  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.889530708s)
	I1114 15:58:29.037253  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:29.052797  876220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:58:29.061624  876220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:58:29.070799  876220 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:58:29.070848  876220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:58:29.303905  876220 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:58:28.695122  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.696046  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:30.341383  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:32.341988  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:31.584191  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.082795  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:33.195568  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:35.695145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:34.839094  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.840873  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:36.086791  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:38.581233  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.234828  876220 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:58:40.234881  876220 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:58:40.234965  876220 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:58:40.235127  876220 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:58:40.235264  876220 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:58:40.235361  876220 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:58:40.237159  876220 out.go:204]   - Generating certificates and keys ...
	I1114 15:58:40.237276  876220 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:58:40.237366  876220 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:58:40.237511  876220 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:58:40.237608  876220 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:58:40.237697  876220 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:58:40.237791  876220 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:58:40.237883  876220 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:58:40.237975  876220 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:58:40.238066  876220 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:58:40.238161  876220 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:58:40.238213  876220 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:58:40.238283  876220 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:58:40.238352  876220 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:58:40.238422  876220 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:58:40.238506  876220 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:58:40.238582  876220 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:58:40.238725  876220 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:58:40.238816  876220 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:58:40.240266  876220 out.go:204]   - Booting up control plane ...
	I1114 15:58:40.240404  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:58:40.240508  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:58:40.240593  876220 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:58:40.240822  876220 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:58:40.240958  876220 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:58:40.241018  876220 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:58:40.241226  876220 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:58:40.241333  876220 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.509675 seconds
	I1114 15:58:40.241470  876220 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:58:40.241658  876220 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:58:40.241744  876220 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:58:40.241979  876220 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-279880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:58:40.242054  876220 kubeadm.go:322] [bootstrap-token] Using token: 2hujph.0fcw82xd7gxidhsk
	I1114 15:58:40.243677  876220 out.go:204]   - Configuring RBAC rules ...
	I1114 15:58:40.243823  876220 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:58:40.243942  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:58:40.244131  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:58:40.244252  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:58:40.244351  876220 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:58:40.244464  876220 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:58:40.244616  876220 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:58:40.244673  876220 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:58:40.244732  876220 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:58:40.244762  876220 kubeadm.go:322] 
	I1114 15:58:40.244828  876220 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:58:40.244835  876220 kubeadm.go:322] 
	I1114 15:58:40.244904  876220 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:58:40.244913  876220 kubeadm.go:322] 
	I1114 15:58:40.244934  876220 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:58:40.244982  876220 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:58:40.245027  876220 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:58:40.245033  876220 kubeadm.go:322] 
	I1114 15:58:40.245108  876220 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:58:40.245128  876220 kubeadm.go:322] 
	I1114 15:58:40.245185  876220 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:58:40.245195  876220 kubeadm.go:322] 
	I1114 15:58:40.245269  876220 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:58:40.245376  876220 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:58:40.245483  876220 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:58:40.245493  876220 kubeadm.go:322] 
	I1114 15:58:40.245606  876220 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:58:40.245700  876220 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:58:40.245708  876220 kubeadm.go:322] 
	I1114 15:58:40.245828  876220 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.245986  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:58:40.246023  876220 kubeadm.go:322] 	--control-plane 
	I1114 15:58:40.246036  876220 kubeadm.go:322] 
	I1114 15:58:40.246148  876220 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:58:40.246158  876220 kubeadm.go:322] 
	I1114 15:58:40.246247  876220 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2hujph.0fcw82xd7gxidhsk \
	I1114 15:58:40.246364  876220 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:58:40.246386  876220 cni.go:84] Creating CNI manager for ""
	I1114 15:58:40.246394  876220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:58:40.248160  876220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:58:40.249669  876220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:58:40.299570  876220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:58:40.399662  876220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:58:40.399751  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=embed-certs-279880 minikube.k8s.io/updated_at=2023_11_14T15_58_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.399759  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.456044  876220 ops.go:34] apiserver oom_adj: -16
	I1114 15:58:40.674206  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:40.780887  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:37.695540  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.195681  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:39.338902  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.339264  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.339844  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:40.582722  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:43.082401  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:41.391744  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:41.892060  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.392311  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.892385  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.391523  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:43.892286  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.392103  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:44.891494  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:45.392324  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:42.695415  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.195275  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.842259  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.339758  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.582481  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:48.079990  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:45.891330  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.391723  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:46.892283  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.391436  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.891664  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.392116  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:48.892052  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.391957  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:49.892316  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:50.391756  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:47.696088  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.195252  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.195695  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.891614  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.391818  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:51.891371  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.391565  876220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:58:52.544346  876220 kubeadm.go:1081] duration metric: took 12.144659895s to wait for elevateKubeSystemPrivileges.
	I1114 15:58:52.544391  876220 kubeadm.go:406] StartCluster complete in 5m9.978264522s
	I1114 15:58:52.544428  876220 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.544541  876220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:58:52.547345  876220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:58:52.547635  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:58:52.547785  876220 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:58:52.547873  876220 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-279880"
	I1114 15:58:52.547886  876220 addons.go:69] Setting default-storageclass=true in profile "embed-certs-279880"
	I1114 15:58:52.547903  876220 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-279880"
	I1114 15:58:52.547907  876220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-279880"
	W1114 15:58:52.547915  876220 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:58:52.547951  876220 config.go:182] Loaded profile config "embed-certs-279880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:58:52.547986  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548010  876220 addons.go:69] Setting metrics-server=true in profile "embed-certs-279880"
	I1114 15:58:52.548027  876220 addons.go:231] Setting addon metrics-server=true in "embed-certs-279880"
	W1114 15:58:52.548038  876220 addons.go:240] addon metrics-server should already be in state true
	I1114 15:58:52.548083  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548508  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548612  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.548478  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.548844  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.568396  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I1114 15:58:52.568429  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I1114 15:58:52.568402  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I1114 15:58:52.569005  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569019  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569009  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.569581  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569612  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.569772  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.569796  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.570042  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570183  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.570252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.570699  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.570718  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.570742  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.570723  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.571364  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.571943  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.571975  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.575936  876220 addons.go:231] Setting addon default-storageclass=true in "embed-certs-279880"
	W1114 15:58:52.575961  876220 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:58:52.575996  876220 host.go:66] Checking if "embed-certs-279880" exists ...
	I1114 15:58:52.576368  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.576412  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.588007  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I1114 15:58:52.588767  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.589487  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.589505  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.589943  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.590164  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.591841  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1114 15:58:52.592269  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.592610  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.594453  876220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:58:52.593100  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.594839  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1114 15:58:52.595836  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:58:52.595856  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:58:52.595874  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.595879  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.596356  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.596654  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.596683  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.597179  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.597199  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.597596  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.598225  876220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:58:52.598250  876220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:58:52.598972  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599389  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.599412  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.599655  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.599823  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.599971  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.600085  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.601301  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.603202  876220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:58:52.604691  876220 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.604701  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:58:52.604714  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.607585  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.607911  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.607942  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.608138  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.608309  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.608450  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.608586  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.614716  876220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1114 15:58:52.615047  876220 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:58:52.615462  876220 main.go:141] libmachine: Using API Version  1
	I1114 15:58:52.615503  876220 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:58:52.615849  876220 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:58:52.616006  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetState
	I1114 15:58:52.617386  876220 main.go:141] libmachine: (embed-certs-279880) Calling .DriverName
	I1114 15:58:52.617630  876220 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.617647  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:58:52.617666  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHHostname
	I1114 15:58:52.620337  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620656  876220 main.go:141] libmachine: (embed-certs-279880) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:2f:80", ip: ""} in network mk-embed-certs-279880: {Iface:virbr3 ExpiryTime:2023-11-14 16:45:14 +0000 UTC Type:0 Mac:52:54:00:50:2f:80 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:embed-certs-279880 Clientid:01:52:54:00:50:2f:80}
	I1114 15:58:52.620700  876220 main.go:141] libmachine: (embed-certs-279880) DBG | domain embed-certs-279880 has defined IP address 192.168.39.147 and MAC address 52:54:00:50:2f:80 in network mk-embed-certs-279880
	I1114 15:58:52.620951  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHPort
	I1114 15:58:52.621103  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHKeyPath
	I1114 15:58:52.621252  876220 main.go:141] libmachine: (embed-certs-279880) Calling .GetSSHUsername
	I1114 15:58:52.621374  876220 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/embed-certs-279880/id_rsa Username:docker}
	I1114 15:58:52.636800  876220 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-279880" context rescaled to 1 replicas
	I1114 15:58:52.636844  876220 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:58:52.638665  876220 out.go:177] * Verifying Kubernetes components...
	I1114 15:58:50.340524  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.341233  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:50.080611  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.081851  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.582577  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:52.640094  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:52.829938  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:58:52.840140  876220 node_ready.go:35] waiting up to 6m0s for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.840653  876220 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:58:52.858164  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:58:52.877415  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:58:52.877448  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:58:52.900588  876220 node_ready.go:49] node "embed-certs-279880" has status "Ready":"True"
	I1114 15:58:52.900614  876220 node_ready.go:38] duration metric: took 60.432125ms waiting for node "embed-certs-279880" to be "Ready" ...
	I1114 15:58:52.900624  876220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:52.972955  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:53.009532  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:58:53.009564  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:58:53.064247  876220 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:53.064283  876220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:58:53.168472  876220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:58:54.543952  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.713966912s)
	I1114 15:58:54.544016  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544029  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544312  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544332  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.544343  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.544374  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.544650  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.544697  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:54.569577  876220 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.728879408s)
	I1114 15:58:54.569603  876220 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1114 15:58:54.572090  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:54.572118  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:54.572396  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:54.572420  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063126  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.20491351s)
	I1114 15:58:55.063197  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063218  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063551  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063572  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.063583  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.063596  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.063609  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.063888  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.063910  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.228754  876220 pod_ready.go:102] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:55.671980  876220 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.503435235s)
	I1114 15:58:55.672050  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672066  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672415  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672481  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672502  876220 main.go:141] libmachine: Making call to close driver server
	I1114 15:58:55.672514  876220 main.go:141] libmachine: (embed-certs-279880) Calling .Close
	I1114 15:58:55.672544  876220 main.go:141] libmachine: (embed-certs-279880) DBG | Closing plugin on server side
	I1114 15:58:55.672777  876220 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:58:55.672795  876220 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:58:55.672807  876220 addons.go:467] Verifying addon metrics-server=true in "embed-certs-279880"
	I1114 15:58:55.674712  876220 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:58:55.676182  876220 addons.go:502] enable addons completed in 3.128402943s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1114 15:58:54.695084  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.696106  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:54.844023  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:57.338618  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:56.660605  876220 pod_ready.go:92] pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.660642  876220 pod_ready.go:81] duration metric: took 3.687643856s waiting for pod "coredns-5dd5756b68-2kj42" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.660659  876220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671773  876220 pod_ready.go:92] pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.671803  876220 pod_ready.go:81] duration metric: took 11.134131ms waiting for pod "coredns-5dd5756b68-42nzn" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.671817  876220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679179  876220 pod_ready.go:92] pod "etcd-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.679212  876220 pod_ready.go:81] duration metric: took 7.385218ms waiting for pod "etcd-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.679224  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691696  876220 pod_ready.go:92] pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.691721  876220 pod_ready.go:81] duration metric: took 12.488161ms waiting for pod "kube-apiserver-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.691734  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704134  876220 pod_ready.go:92] pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:56.704153  876220 pod_ready.go:81] duration metric: took 12.411686ms waiting for pod "kube-controller-manager-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:56.704161  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950181  876220 pod_ready.go:92] pod "kube-proxy-qdppd" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:57.950213  876220 pod_ready.go:81] duration metric: took 1.246044532s waiting for pod "kube-proxy-qdppd" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:57.950226  876220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237122  876220 pod_ready.go:92] pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace has status "Ready":"True"
	I1114 15:58:58.237150  876220 pod_ready.go:81] duration metric: took 286.915812ms waiting for pod "kube-scheduler-embed-certs-279880" in "kube-system" namespace to be "Ready" ...
	I1114 15:58:58.237158  876220 pod_ready.go:38] duration metric: took 5.336525686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:58:58.237177  876220 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:58:58.237227  876220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:58:58.260115  876220 api_server.go:72] duration metric: took 5.623228202s to wait for apiserver process to appear ...
	I1114 15:58:58.260147  876220 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:58:58.260169  876220 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1114 15:58:58.265361  876220 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1114 15:58:58.266889  876220 api_server.go:141] control plane version: v1.28.3
	I1114 15:58:58.266918  876220 api_server.go:131] duration metric: took 6.76351ms to wait for apiserver health ...
	I1114 15:58:58.266938  876220 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:58:58.439329  876220 system_pods.go:59] 9 kube-system pods found
	I1114 15:58:58.439362  876220 system_pods.go:61] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.439367  876220 system_pods.go:61] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.439371  876220 system_pods.go:61] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.439375  876220 system_pods.go:61] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.439379  876220 system_pods.go:61] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.439383  876220 system_pods.go:61] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.439387  876220 system_pods.go:61] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.439395  876220 system_pods.go:61] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.439402  876220 system_pods.go:61] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.439412  876220 system_pods.go:74] duration metric: took 172.465662ms to wait for pod list to return data ...
	I1114 15:58:58.439426  876220 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:58:58.637240  876220 default_sa.go:45] found service account: "default"
	I1114 15:58:58.637269  876220 default_sa.go:55] duration metric: took 197.834816ms for default service account to be created ...
	I1114 15:58:58.637278  876220 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:58:58.840945  876220 system_pods.go:86] 9 kube-system pods found
	I1114 15:58:58.840976  876220 system_pods.go:89] "coredns-5dd5756b68-2kj42" [9c290848-a9d3-48c2-8f26-22295a543f22] Running
	I1114 15:58:58.840984  876220 system_pods.go:89] "coredns-5dd5756b68-42nzn" [88175e14-09c2-4dc2-a56a-fa3bf71ae420] Running
	I1114 15:58:58.840990  876220 system_pods.go:89] "etcd-embed-certs-279880" [cd6ef8ea-1ab3-4962-b02d-5723322d786a] Running
	I1114 15:58:58.840996  876220 system_pods.go:89] "kube-apiserver-embed-certs-279880" [75224fe4-4d93-4b09-bd19-6644a5f6d05c] Running
	I1114 15:58:58.841001  876220 system_pods.go:89] "kube-controller-manager-embed-certs-279880" [025c7cde-2e92-4779-be95-ac11bd47f666] Running
	I1114 15:58:58.841008  876220 system_pods.go:89] "kube-proxy-qdppd" [ddcb6130-1e2c-49b0-99de-b6b7d576d82c] Running
	I1114 15:58:58.841014  876220 system_pods.go:89] "kube-scheduler-embed-certs-279880" [74025280-9310-428d-84ed-46e2a472d13e] Running
	I1114 15:58:58.841024  876220 system_pods.go:89] "metrics-server-57f55c9bc5-g5wh5" [e51d7d56-4203-404c-ac65-4b1e65ac4ad3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:58:58.841032  876220 system_pods.go:89] "storage-provisioner" [3168b6ac-f288-4e1d-a4ce-78c4198debba] Running
	I1114 15:58:58.841046  876220 system_pods.go:126] duration metric: took 203.761925ms to wait for k8s-apps to be running ...
	I1114 15:58:58.841058  876220 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:58:58.841143  876220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:58:58.857376  876220 system_svc.go:56] duration metric: took 16.307402ms WaitForService to wait for kubelet.
	I1114 15:58:58.857414  876220 kubeadm.go:581] duration metric: took 6.220529321s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:58:58.857439  876220 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:58:59.036083  876220 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:58:59.036112  876220 node_conditions.go:123] node cpu capacity is 2
	I1114 15:58:59.036123  876220 node_conditions.go:105] duration metric: took 178.67985ms to run NodePressure ...
	I1114 15:58:59.036136  876220 start.go:228] waiting for startup goroutines ...
	I1114 15:58:59.036142  876220 start.go:233] waiting for cluster config update ...
	I1114 15:58:59.036152  876220 start.go:242] writing updated cluster config ...
	I1114 15:58:59.036464  876220 ssh_runner.go:195] Run: rm -f paused
	I1114 15:58:59.092065  876220 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:58:59.093827  876220 out.go:177] * Done! kubectl is now configured to use "embed-certs-279880" cluster and "default" namespace by default
	I1114 15:58:57.082065  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.082525  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:58.696271  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.195205  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:58:59.339863  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.839918  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:01.582598  876668 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:02.796920  876668 pod_ready.go:81] duration metric: took 4m0.000259164s waiting for pod "metrics-server-57f55c9bc5-ss2ks" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:02.796965  876668 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:02.796978  876668 pod_ready.go:38] duration metric: took 4m6.075965552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:02.796999  876668 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:02.797042  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:02.797123  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:02.851170  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:02.851199  876668 cri.go:89] found id: ""
	I1114 15:59:02.851210  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:02.851271  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.857251  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:02.857323  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:02.904914  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:02.904939  876668 cri.go:89] found id: ""
	I1114 15:59:02.904947  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:02.904994  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.909276  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:02.909350  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:02.944708  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:02.944778  876668 cri.go:89] found id: ""
	I1114 15:59:02.944789  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:02.944856  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.949260  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:02.949334  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:02.986830  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:02.986858  876668 cri.go:89] found id: ""
	I1114 15:59:02.986868  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:02.986928  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:02.991432  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:02.991511  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:03.028072  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.028101  876668 cri.go:89] found id: ""
	I1114 15:59:03.028113  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:03.028177  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.032678  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:03.032771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:03.070651  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.070671  876668 cri.go:89] found id: ""
	I1114 15:59:03.070679  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:03.070727  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.075127  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:03.075192  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:03.117191  876668 cri.go:89] found id: ""
	I1114 15:59:03.117221  876668 logs.go:284] 0 containers: []
	W1114 15:59:03.117229  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:03.117235  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:03.117300  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:03.163227  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.163255  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:03.163260  876668 cri.go:89] found id: ""
	I1114 15:59:03.163269  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:03.163322  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.167410  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:03.171362  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:03.171389  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:03.330078  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:03.330113  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:03.372318  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:03.372349  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:03.414474  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:03.414506  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:03.471989  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:03.472025  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:03.516802  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:03.516834  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:03.532186  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:03.532218  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:03.987984  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:03.988029  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:04.045261  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:04.045305  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:04.095816  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:04.095853  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:04.148084  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:04.148132  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:04.200992  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:04.201039  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:04.239171  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:04.239207  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
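
[Editor's note] The block above shows minikube's diagnostic-gathering cycle: for each control-plane component it lists matching CRI containers with "crictl ps -a --quiet --name=<component>" and then tails each container's logs with "crictl logs --tail 400 <id>". The following is a minimal Go sketch of that same pattern, not minikube's actual logs.go implementation; it assumes crictl is on PATH and that sudo is available non-interactively.

// Illustrative sketch: enumerate CRI containers per component and tail their logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Equivalent of: sudo crictl logs --tail 400 <id>
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering logs for %s [%s] failed: %v\n", name, id, err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
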
	I1114 15:59:03.695077  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.194941  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:04.339648  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.839045  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:08.841546  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:06.787847  876668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:06.808020  876668 api_server.go:72] duration metric: took 4m16.941929205s to wait for apiserver process to appear ...
	I1114 15:59:06.808052  876668 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:06.808087  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:06.808146  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:06.849716  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:06.849747  876668 cri.go:89] found id: ""
	I1114 15:59:06.849758  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:06.849816  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.854025  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:06.854093  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:06.894331  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:06.894361  876668 cri.go:89] found id: ""
	I1114 15:59:06.894371  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:06.894430  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.899047  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:06.899137  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:06.947156  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:06.947194  876668 cri.go:89] found id: ""
	I1114 15:59:06.947206  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:06.947279  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:06.952972  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:06.953045  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:06.997872  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:06.997899  876668 cri.go:89] found id: ""
	I1114 15:59:06.997910  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:06.997972  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.002282  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:07.002362  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:07.041689  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.041722  876668 cri.go:89] found id: ""
	I1114 15:59:07.041734  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:07.041800  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.045730  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:07.045797  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:07.091996  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.092021  876668 cri.go:89] found id: ""
	I1114 15:59:07.092032  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:07.092094  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.100690  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:07.100771  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:07.141635  876668 cri.go:89] found id: ""
	I1114 15:59:07.141670  876668 logs.go:284] 0 containers: []
	W1114 15:59:07.141681  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:07.141689  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:07.141750  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:07.184807  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:07.184839  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.184847  876668 cri.go:89] found id: ""
	I1114 15:59:07.184857  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:07.184920  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.189361  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:07.197666  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:07.197694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:07.243532  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:07.243568  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:07.284479  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:07.284520  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:07.326309  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:07.326341  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:07.794035  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:07.794077  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:07.836008  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:07.836050  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:07.886157  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:07.886192  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:07.930752  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:07.930795  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:07.983727  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:07.983765  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:08.024969  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:08.025000  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:08.079050  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:08.079090  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:08.093653  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:08.093691  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:08.228823  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:08.228864  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:08.196022  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.196145  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:12.196843  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:11.340269  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:13.840055  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:10.780836  876668 api_server.go:253] Checking apiserver healthz at https://192.168.61.196:8444/healthz ...
	I1114 15:59:10.793555  876668 api_server.go:279] https://192.168.61.196:8444/healthz returned 200:
	ok
	I1114 15:59:10.794839  876668 api_server.go:141] control plane version: v1.28.3
	I1114 15:59:10.794868  876668 api_server.go:131] duration metric: took 3.986808086s to wait for apiserver health ...
	I1114 15:59:10.794878  876668 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:10.794907  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1114 15:59:10.794989  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1114 15:59:10.842028  876668 cri.go:89] found id: "c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:10.842050  876668 cri.go:89] found id: ""
	I1114 15:59:10.842059  876668 logs.go:284] 1 containers: [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5]
	I1114 15:59:10.842113  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.846938  876668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1114 15:59:10.847030  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1114 15:59:10.893360  876668 cri.go:89] found id: "ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:10.893386  876668 cri.go:89] found id: ""
	I1114 15:59:10.893394  876668 logs.go:284] 1 containers: [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07]
	I1114 15:59:10.893443  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.899601  876668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1114 15:59:10.899669  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1114 15:59:10.949519  876668 cri.go:89] found id: "335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:10.949542  876668 cri.go:89] found id: ""
	I1114 15:59:10.949550  876668 logs.go:284] 1 containers: [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a]
	I1114 15:59:10.949602  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.953875  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1114 15:59:10.953936  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1114 15:59:10.994565  876668 cri.go:89] found id: "bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:10.994595  876668 cri.go:89] found id: ""
	I1114 15:59:10.994605  876668 logs.go:284] 1 containers: [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156]
	I1114 15:59:10.994659  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:10.999120  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1114 15:59:10.999187  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1114 15:59:11.039364  876668 cri.go:89] found id: "a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.039392  876668 cri.go:89] found id: ""
	I1114 15:59:11.039403  876668 logs.go:284] 1 containers: [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864]
	I1114 15:59:11.039509  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.044115  876668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1114 15:59:11.044174  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1114 15:59:11.088803  876668 cri.go:89] found id: "96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.088835  876668 cri.go:89] found id: ""
	I1114 15:59:11.088846  876668 logs.go:284] 1 containers: [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3]
	I1114 15:59:11.088917  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.094005  876668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1114 15:59:11.094076  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1114 15:59:11.145247  876668 cri.go:89] found id: ""
	I1114 15:59:11.145276  876668 logs.go:284] 0 containers: []
	W1114 15:59:11.145285  876668 logs.go:286] No container was found matching "kindnet"
	I1114 15:59:11.145294  876668 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1114 15:59:11.145355  876668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1114 15:59:11.188916  876668 cri.go:89] found id: "19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.188950  876668 cri.go:89] found id: "251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.188957  876668 cri.go:89] found id: ""
	I1114 15:59:11.188967  876668 logs.go:284] 2 containers: [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8]
	I1114 15:59:11.189029  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.195578  876668 ssh_runner.go:195] Run: which crictl
	I1114 15:59:11.200146  876668 logs.go:123] Gathering logs for kube-scheduler [bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156] ...
	I1114 15:59:11.200174  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bde54fa8d8b9db385111c65b8b3690a2951af2b7c47305a4a054841a3ea16156"
	I1114 15:59:11.240413  876668 logs.go:123] Gathering logs for storage-provisioner [19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603] ...
	I1114 15:59:11.240458  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19e99b311805abfe78ebc918e152f506b2c3e4d8f7cc385aa93bc2f38604c603"
	I1114 15:59:11.290614  876668 logs.go:123] Gathering logs for CRI-O ...
	I1114 15:59:11.290648  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1114 15:59:11.638700  876668 logs.go:123] Gathering logs for dmesg ...
	I1114 15:59:11.638743  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1114 15:59:11.654234  876668 logs.go:123] Gathering logs for kube-controller-manager [96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3] ...
	I1114 15:59:11.654267  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96d5f7a9c1434d67953a8526c382a90312315fcf560e9fcd4c421887803ca2f3"
	I1114 15:59:11.709147  876668 logs.go:123] Gathering logs for coredns [335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a] ...
	I1114 15:59:11.709184  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 335b691953328fffe1fa9be822a3753d879ff80ee9f285aca8aceec34279465a"
	I1114 15:59:11.751661  876668 logs.go:123] Gathering logs for kube-proxy [a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864] ...
	I1114 15:59:11.751701  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e10dc7650db1a5edd220406ce417952838aff1d18fec8f6c96889f96e95864"
	I1114 15:59:11.796993  876668 logs.go:123] Gathering logs for storage-provisioner [251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8] ...
	I1114 15:59:11.797041  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251b882e2626aded5db9a00ed3f19c4e64c24a34a31697148cfc5ed14a3deff8"
	I1114 15:59:11.841478  876668 logs.go:123] Gathering logs for describe nodes ...
	I1114 15:59:11.841510  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1114 15:59:11.972862  876668 logs.go:123] Gathering logs for etcd [ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07] ...
	I1114 15:59:11.972902  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab4ac318c279afccace04d0b03d35ff24994d869dc42fe88188f443a6896ce07"
	I1114 15:59:12.019217  876668 logs.go:123] Gathering logs for container status ...
	I1114 15:59:12.019260  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1114 15:59:12.073396  876668 logs.go:123] Gathering logs for kubelet ...
	I1114 15:59:12.073443  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1114 15:59:12.142653  876668 logs.go:123] Gathering logs for kube-apiserver [c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5] ...
	I1114 15:59:12.142694  876668 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ca3bf950b5956ebb76d605325d0f36812ba3bb4aa7a9e7741b4c2f33653dc5"
	I1114 15:59:14.704129  876668 system_pods.go:59] 8 kube-system pods found
	I1114 15:59:14.704159  876668 system_pods.go:61] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.704167  876668 system_pods.go:61] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.704173  876668 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.704179  876668 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.704184  876668 system_pods.go:61] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.704191  876668 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.704200  876668 system_pods.go:61] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.704207  876668 system_pods.go:61] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.704217  876668 system_pods.go:74] duration metric: took 3.909331461s to wait for pod list to return data ...
	I1114 15:59:14.704231  876668 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:14.706920  876668 default_sa.go:45] found service account: "default"
	I1114 15:59:14.706944  876668 default_sa.go:55] duration metric: took 2.702527ms for default service account to be created ...
	I1114 15:59:14.706954  876668 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:14.714049  876668 system_pods.go:86] 8 kube-system pods found
	I1114 15:59:14.714080  876668 system_pods.go:89] "coredns-5dd5756b68-b8szg" [ac852af7-15e4-4112-9dff-c76da29439af] Running
	I1114 15:59:14.714089  876668 system_pods.go:89] "etcd-default-k8s-diff-port-529430" [2a769ed0-ec7c-492e-a293-631b08566e03] Running
	I1114 15:59:14.714096  876668 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-529430" [8aad3b83-ab85-484a-8fe5-a690c23a6ce1] Running
	I1114 15:59:14.714101  876668 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-529430" [29151afb-5e0a-4b13-9a57-331312bdc25d] Running
	I1114 15:59:14.714106  876668 system_pods.go:89] "kube-proxy-zpchs" [53e58226-44f2-4482-a4f4-1628cbcad8f9] Running
	I1114 15:59:14.714113  876668 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-529430" [9c6d69b6-ebc1-4f2d-b115-c06d4d2370ba] Running
	I1114 15:59:14.714128  876668 system_pods.go:89] "metrics-server-57f55c9bc5-ss2ks" [73fc9292-8667-473e-b3ca-43c4ae9fbdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:14.714142  876668 system_pods.go:89] "storage-provisioner" [7934b414-9ec6-40dd-be45-6c6ab42dd75b] Running
	I1114 15:59:14.714152  876668 system_pods.go:126] duration metric: took 7.191238ms to wait for k8s-apps to be running ...
	I1114 15:59:14.714174  876668 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 15:59:14.714231  876668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:14.734987  876668 system_svc.go:56] duration metric: took 20.804278ms WaitForService to wait for kubelet.
	I1114 15:59:14.735015  876668 kubeadm.go:581] duration metric: took 4m24.868931304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 15:59:14.735038  876668 node_conditions.go:102] verifying NodePressure condition ...
	I1114 15:59:14.737844  876668 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 15:59:14.737868  876668 node_conditions.go:123] node cpu capacity is 2
	I1114 15:59:14.737878  876668 node_conditions.go:105] duration metric: took 2.834918ms to run NodePressure ...
	I1114 15:59:14.737889  876668 start.go:228] waiting for startup goroutines ...
	I1114 15:59:14.737895  876668 start.go:233] waiting for cluster config update ...
	I1114 15:59:14.737905  876668 start.go:242] writing updated cluster config ...
	I1114 15:59:14.738157  876668 ssh_runner.go:195] Run: rm -f paused
	I1114 15:59:14.791076  876668 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 15:59:14.793853  876668 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-529430" cluster and "default" namespace by default
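
[Editor's note] Just before this cluster reports "Done!", the log shows api_server.go polling https://192.168.61.196:8444/healthz until it returns 200 ("ok"), then waiting for kube-system pods and the kubelet service. Below is a minimal Go sketch of such a healthz poller for illustration only; it is not minikube's implementation and, unlike the real client (which trusts the cluster CA), it skips TLS verification purely to keep the sketch short.

// Illustrative sketch: poll an apiserver healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real client verifies the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.196:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}
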
	I1114 15:59:14.694842  876396 pod_ready.go:102] pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:15.887599  876396 pod_ready.go:81] duration metric: took 4m0.000892827s waiting for pod "metrics-server-74d5856cc6-q9hc5" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:15.887641  876396 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:15.887664  876396 pod_ready.go:38] duration metric: took 4m1.199797165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:15.887694  876396 kubeadm.go:640] restartCluster took 5m7.501574769s
	W1114 15:59:15.887782  876396 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:15.887859  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:16.340114  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:18.340157  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:20.901839  876396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.013944828s)
	I1114 15:59:20.901933  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:20.915929  876396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:20.928081  876396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:20.937656  876396 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:20.937756  876396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1114 15:59:20.998439  876396 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1114 15:59:20.998593  876396 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:21.145429  876396 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:21.145639  876396 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:21.145777  876396 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:21.387825  876396 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:21.388897  876396 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:21.396490  876396 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1114 15:59:21.518176  876396 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:21.520261  876396 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:21.520398  876396 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:21.520496  876396 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:21.520590  876396 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:21.520686  876396 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:21.520797  876396 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:21.520918  876396 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:21.521009  876396 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:21.521434  876396 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:21.521822  876396 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:21.522333  876396 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:21.522651  876396 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:21.522730  876396 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:21.707438  876396 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:21.890929  876396 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:22.058077  876396 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:22.234616  876396 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:22.235636  876396 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:22.237626  876396 out.go:204]   - Booting up control plane ...
	I1114 15:59:22.237743  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:22.241964  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:22.242976  876396 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:22.244745  876396 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:22.248349  876396 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:20.341685  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:22.838566  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:25.337887  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:27.341368  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.256998  876396 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005833 seconds
	I1114 15:59:32.257145  876396 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:32.272061  876396 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:32.797161  876396 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:32.797367  876396 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-842105 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 15:59:33.314721  876396 kubeadm.go:322] [bootstrap-token] Using token: 04dlot.9kpu87sb3ajm8dfs
	I1114 15:59:33.316454  876396 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:33.316628  876396 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:33.324455  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:33.328877  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:33.335460  876396 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:33.339307  876396 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:33.422742  876396 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:33.757796  876396 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:33.759150  876396 kubeadm.go:322] 
	I1114 15:59:33.759248  876396 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:33.759281  876396 kubeadm.go:322] 
	I1114 15:59:33.759442  876396 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:33.759459  876396 kubeadm.go:322] 
	I1114 15:59:33.759495  876396 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:33.759577  876396 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:33.759647  876396 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:33.759657  876396 kubeadm.go:322] 
	I1114 15:59:33.759726  876396 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:33.759828  876396 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:33.759922  876396 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:33.759931  876396 kubeadm.go:322] 
	I1114 15:59:33.760050  876396 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1114 15:59:33.760143  876396 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:33.760154  876396 kubeadm.go:322] 
	I1114 15:59:33.760239  876396 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760360  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:33.760397  876396 kubeadm.go:322]     --control-plane 	  
	I1114 15:59:33.760408  876396 kubeadm.go:322] 
	I1114 15:59:33.760517  876396 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:33.760527  876396 kubeadm.go:322] 
	I1114 15:59:33.760624  876396 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 04dlot.9kpu87sb3ajm8dfs \
	I1114 15:59:33.760781  876396 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:33.764918  876396 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:33.764993  876396 cni.go:84] Creating CNI manager for ""
	I1114 15:59:33.765010  876396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:33.767708  876396 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:29.839580  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:32.339612  876065 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace has status "Ready":"False"
	I1114 15:59:33.072424  876065 pod_ready.go:81] duration metric: took 4m0.000921839s waiting for pod "metrics-server-57f55c9bc5-6lg6h" in "kube-system" namespace to be "Ready" ...
	E1114 15:59:33.072553  876065 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1114 15:59:33.072606  876065 pod_ready.go:38] duration metric: took 4m10.602378093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:33.072664  876065 kubeadm.go:640] restartCluster took 4m30.632686786s
	W1114 15:59:33.072782  876065 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1114 15:59:33.073057  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1114 15:59:33.769398  876396 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:33.781327  876396 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1114 15:59:33.810672  876396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:33.810839  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:33.810927  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=old-k8s-version-842105 minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.181391  876396 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:34.181528  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.301381  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:34.919870  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.419262  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:35.919637  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.419780  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:36.919453  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.420046  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:37.919605  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.419845  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:38.919474  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.419303  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:39.919616  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.419633  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:40.919220  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.419298  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:41.919396  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.420042  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:42.919886  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.419274  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:43.920217  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.419952  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:44.919511  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.419619  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:45.919762  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.420141  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:46.919676  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.261922  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.188828866s)
	I1114 15:59:47.262031  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:47.276268  876065 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 15:59:47.285701  876065 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 15:59:47.294481  876065 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 15:59:47.294540  876065 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1114 15:59:47.348856  876065 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 15:59:47.348959  876065 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 15:59:47.530233  876065 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 15:59:47.530413  876065 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 15:59:47.530581  876065 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 15:59:47.784516  876065 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 15:59:47.420108  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:47.920005  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.419707  876396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:48.527158  876396 kubeadm.go:1081] duration metric: took 14.716377346s to wait for elevateKubeSystemPrivileges.
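
[Editor's note] The repeated "kubectl get sa default" invocations above are minikube's elevateKubeSystemPrivileges wait: it re-runs the command roughly every half second until the "default" service account exists (about 14.7s here), and only then creates the cluster-admin role binding. The Go sketch below illustrates that poll-until-success pattern; the kubectl and kubeconfig paths mirror the log but are assumptions, and this is not the actual minikube code.

// Illustrative sketch: retry "kubectl get sa default" until it succeeds or times out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		5*time.Minute,
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account found")
}
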
	I1114 15:59:48.527193  876396 kubeadm.go:406] StartCluster complete in 5m40.211957984s
	I1114 15:59:48.527213  876396 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.527323  876396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:59:48.529723  876396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 15:59:48.530058  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 15:59:48.530134  876396 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 15:59:48.530222  876396 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530248  876396 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-842105"
	W1114 15:59:48.530257  876396 addons.go:240] addon storage-provisioner should already be in state true
	I1114 15:59:48.530256  876396 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530285  876396 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-842105"
	W1114 15:59:48.530297  876396 addons.go:240] addon metrics-server should already be in state true
	I1114 15:59:48.530321  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530342  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.530354  876396 config.go:182] Loaded profile config "old-k8s-version-842105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1114 15:59:48.530429  876396 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-842105"
	I1114 15:59:48.530457  876396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-842105"
	I1114 15:59:48.530764  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530793  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530805  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.530795  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530818  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.530822  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.549568  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1114 15:59:48.549642  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I1114 15:59:48.550081  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550240  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.550734  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550755  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.550866  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.550887  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.551164  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551425  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.551622  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.551766  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.551813  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.552539  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1114 15:59:48.553028  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.554044  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.554063  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.554522  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.555069  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555106  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.555404  876396 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-842105"
	W1114 15:59:48.555470  876396 addons.go:240] addon default-storageclass should already be in state true
	I1114 15:59:48.555516  876396 host.go:66] Checking if "old-k8s-version-842105" exists ...
	I1114 15:59:48.555924  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.555961  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.576876  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1114 15:59:48.576912  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I1114 15:59:48.576878  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1114 15:59:48.577223  876396 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-842105" context rescaled to 1 replicas
	I1114 15:59:48.577266  876396 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 15:59:48.579711  876396 out.go:177] * Verifying Kubernetes components...
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577660  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.577672  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.581751  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:59:48.580402  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581791  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580422  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581852  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.580432  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.581919  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.582238  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582286  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582314  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.582439  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.582735  876396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:59:48.582751  876396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:59:48.583264  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.584865  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.586792  876396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 15:59:48.585415  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.588364  876396 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.588378  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 15:59:48.588398  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.592854  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.594307  876396 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 15:59:47.786524  876065 out.go:204]   - Generating certificates and keys ...
	I1114 15:59:47.786668  876065 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 15:59:47.786744  876065 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 15:59:47.786843  876065 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 15:59:47.786912  876065 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1114 15:59:47.787108  876065 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1114 15:59:47.787698  876065 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1114 15:59:47.788301  876065 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1114 15:59:47.788930  876065 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1114 15:59:47.789533  876065 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 15:59:47.790115  876065 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 15:59:47.790449  876065 kubeadm.go:322] [certs] Using the existing "sa" key
	I1114 15:59:47.790523  876065 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 15:59:47.975724  876065 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 15:59:48.056071  876065 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 15:59:48.340177  876065 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 15:59:48.733230  876065 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 15:59:48.734350  876065 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 15:59:48.738369  876065 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 15:59:48.740013  876065 out.go:204]   - Booting up control plane ...
	I1114 15:59:48.740143  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 15:59:48.740271  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 15:59:48.743856  876065 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 15:59:48.763450  876065 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 15:59:48.764688  876065 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 15:59:48.764768  876065 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 15:59:48.932286  876065 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 15:59:48.592918  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.593079  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.595739  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 15:59:48.595754  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 15:59:48.595776  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.595826  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.595852  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.596957  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.597212  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.599011  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599448  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.599710  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.599755  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.599975  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.600142  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.600304  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.607351  876396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I1114 15:59:48.607929  876396 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:59:48.608484  876396 main.go:141] libmachine: Using API Version  1
	I1114 15:59:48.608509  876396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:59:48.608998  876396 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:59:48.609237  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetState
	I1114 15:59:48.610958  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .DriverName
	I1114 15:59:48.611196  876396 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.611210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 15:59:48.611228  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHHostname
	I1114 15:59:48.613709  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614297  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:79:07", ip: ""} in network mk-old-k8s-version-842105: {Iface:virbr1 ExpiryTime:2023-11-14 16:44:12 +0000 UTC Type:0 Mac:52:54:00:d4:79:07 Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:old-k8s-version-842105 Clientid:01:52:54:00:d4:79:07}
	I1114 15:59:48.614322  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | domain old-k8s-version-842105 has defined IP address 192.168.72.151 and MAC address 52:54:00:d4:79:07 in network mk-old-k8s-version-842105
	I1114 15:59:48.614366  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHPort
	I1114 15:59:48.614539  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHKeyPath
	I1114 15:59:48.614631  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .GetSSHUsername
	I1114 15:59:48.614711  876396 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/old-k8s-version-842105/id_rsa Username:docker}
	I1114 15:59:48.708399  876396 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.708481  876396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 15:59:48.715087  876396 node_ready.go:49] node "old-k8s-version-842105" has status "Ready":"True"
	I1114 15:59:48.715111  876396 node_ready.go:38] duration metric: took 6.675707ms waiting for node "old-k8s-version-842105" to be "Ready" ...
	I1114 15:59:48.715124  876396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718748  876396 pod_ready.go:38] duration metric: took 3.605786ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 15:59:48.718790  876396 api_server.go:52] waiting for apiserver process to appear ...
	I1114 15:59:48.718857  876396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:59:48.750191  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 15:59:48.773186  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 15:59:48.773210  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 15:59:48.788782  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 15:59:48.847057  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 15:59:48.847090  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 15:59:48.905401  876396 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:48.905442  876396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 15:59:48.986582  876396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 15:59:49.606449  876396 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
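The "host record injected" line above is the outcome of the sed pipeline run a few lines earlier against the coredns ConfigMap: it splices a hosts block for host.minikube.internal into the Corefile and re-applies it. A hedged way to confirm the result from the kubeconfig minikube maintains for this profile (context name taken from this log; the surrounding plugin ordering in the Corefile varies by Kubernetes version):

	kubectl --context old-k8s-version-842105 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to contain, alongside the stock plugins, the injected block:
	#     hosts {
	#        192.168.72.1 host.minikube.internal
	#        fallthrough
	#     }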
	I1114 15:59:49.606451  876396 api_server.go:72] duration metric: took 1.029145444s to wait for apiserver process to appear ...
	I1114 15:59:49.606506  876396 api_server.go:88] waiting for apiserver healthz status ...
	I1114 15:59:49.606530  876396 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I1114 15:59:49.709702  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.709732  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.710100  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.710130  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.710144  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.710153  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.711953  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.711985  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.711994  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.755976  876396 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I1114 15:59:49.756696  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:49.756719  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:49.757036  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:49.757103  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:49.757121  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:49.757390  876396 api_server.go:141] control plane version: v1.16.0
	I1114 15:59:49.757410  876396 api_server.go:131] duration metric: took 150.89717ms to wait for apiserver health ...
	I1114 15:59:49.757447  876396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 15:59:49.763460  876396 system_pods.go:59] 2 kube-system pods found
	I1114 15:59:49.763487  876396 system_pods.go:61] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.763497  876396 system_pods.go:61] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.763509  876396 system_pods.go:74] duration metric: took 6.051168ms to wait for pod list to return data ...
	I1114 15:59:49.763518  876396 default_sa.go:34] waiting for default service account to be created ...
	I1114 15:59:49.776313  876396 default_sa.go:45] found service account: "default"
	I1114 15:59:49.776341  876396 default_sa.go:55] duration metric: took 12.814566ms for default service account to be created ...
	I1114 15:59:49.776351  876396 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 15:59:49.782462  876396 system_pods.go:86] 2 kube-system pods found
	I1114 15:59:49.782502  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:49.782518  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:49.782544  876396 retry.go:31] will retry after 311.640315ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
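The retry lines that follow come from minikube polling kube-system until every system-critical component listed at 15:59:48.715124 is present. For this v1.16.0 cluster the control-plane pods are static pods carrying component labels, so a roughly equivalent manual check, sketched under the assumption of the standard static-pod and add-on labels named in that log line, is:

	kubectl --context old-k8s-version-842105 -n kube-system get pods \
	  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'
	# plus the labelled add-on pods the same wait covers:
	kubectl --context old-k8s-version-842105 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context old-k8s-version-842105 -n kube-system get pods -l k8s-app=kube-proxy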
	I1114 15:59:50.157150  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.368304542s)
	I1114 15:59:50.157269  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157286  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.157688  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.157711  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.157730  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.157743  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.158219  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.158270  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.169219  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.169264  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.169275  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.169282  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending
	I1114 15:59:50.169304  876396 retry.go:31] will retry after 335.621385ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.357400  876396 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.370764048s)
	I1114 15:59:50.357474  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.357494  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.359782  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.359789  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.359811  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.359829  876396 main.go:141] libmachine: Making call to close driver server
	I1114 15:59:50.359840  876396 main.go:141] libmachine: (old-k8s-version-842105) Calling .Close
	I1114 15:59:50.360228  876396 main.go:141] libmachine: (old-k8s-version-842105) DBG | Closing plugin on server side
	I1114 15:59:50.360264  876396 main.go:141] libmachine: Successfully made call to close driver server
	I1114 15:59:50.360285  876396 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 15:59:50.360333  876396 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-842105"
	I1114 15:59:50.362545  876396 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1114 15:59:50.364302  876396 addons.go:502] enable addons completed in 1.834168315s: enabled=[default-storageclass storage-provisioner metrics-server]
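One detail relevant to the later Pending states: this run points the metrics-server addon at fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line above), a registry name that is not expected to resolve, so metrics-server-74d5856cc6-8cxxt is not expected to pull its image and become Ready. An illustrative way to see which image the Deployment was given (deployment name metrics-server inferred from the ReplicaSet hash in the pod name):

	kubectl --context old-k8s-version-842105 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'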
	I1114 15:59:50.616547  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.616597  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.616608  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.616623  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.616645  876396 retry.go:31] will retry after 349.737645ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:50.971245  876396 system_pods.go:86] 3 kube-system pods found
	I1114 15:59:50.971286  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:50.971298  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:50.971312  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:50.971333  876396 retry.go:31] will retry after 562.981893ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:51.541777  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:51.541822  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:51.541849  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 15:59:51.541862  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:51.541870  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 15:59:51.541892  876396 retry.go:31] will retry after 617.692214ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler
	I1114 15:59:52.166157  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.166192  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.166199  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.166207  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.166211  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.166227  876396 retry.go:31] will retry after 671.968353ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:52.844235  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:52.844269  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:52.844276  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:52.844285  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:52.844290  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:52.844309  876396 retry.go:31] will retry after 955.353451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:53.814593  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:53.814626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:53.814636  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:53.814651  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:53.814661  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:53.814680  876396 retry.go:31] will retry after 1.306938168s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:55.127401  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:55.127436  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:55.127445  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:55.127457  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:55.127465  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:55.127488  876396 retry.go:31] will retry after 1.627615182s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.759304  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:56.759339  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:56.759345  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:56.759353  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:56.759358  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:56.759373  876396 retry.go:31] will retry after 2.046606031s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:56.936792  876065 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004387 seconds
	I1114 15:59:56.936992  876065 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 15:59:56.965969  876065 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 15:59:57.504894  876065 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 15:59:57.505171  876065 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-490998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 15:59:58.021451  876065 kubeadm.go:322] [bootstrap-token] Using token: 3x3ma3.qtutj9fi1nmgzc3r
	I1114 15:59:58.023064  876065 out.go:204]   - Configuring RBAC rules ...
	I1114 15:59:58.023220  876065 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 15:59:58.028334  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 15:59:58.039638  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 15:59:58.043783  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 15:59:58.048814  876065 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 15:59:58.061419  876065 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 15:59:58.075996  876065 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 15:59:58.328245  876065 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 15:59:58.435170  876065 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 15:59:58.436684  876065 kubeadm.go:322] 
	I1114 15:59:58.436781  876065 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 15:59:58.436796  876065 kubeadm.go:322] 
	I1114 15:59:58.436889  876065 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 15:59:58.436932  876065 kubeadm.go:322] 
	I1114 15:59:58.436988  876065 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 15:59:58.437091  876065 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 15:59:58.437155  876065 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 15:59:58.437176  876065 kubeadm.go:322] 
	I1114 15:59:58.437231  876065 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 15:59:58.437239  876065 kubeadm.go:322] 
	I1114 15:59:58.437281  876065 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 15:59:58.437288  876065 kubeadm.go:322] 
	I1114 15:59:58.437353  876065 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 15:59:58.437449  876065 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 15:59:58.437564  876065 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 15:59:58.437574  876065 kubeadm.go:322] 
	I1114 15:59:58.437684  876065 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 15:59:58.437800  876065 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 15:59:58.437816  876065 kubeadm.go:322] 
	I1114 15:59:58.437937  876065 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438087  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 \
	I1114 15:59:58.438116  876065 kubeadm.go:322] 	--control-plane 
	I1114 15:59:58.438124  876065 kubeadm.go:322] 
	I1114 15:59:58.438194  876065 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 15:59:58.438202  876065 kubeadm.go:322] 
	I1114 15:59:58.438267  876065 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3x3ma3.qtutj9fi1nmgzc3r \
	I1114 15:59:58.438355  876065 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f269819edd1dfacd2ede9a335c11995b2b7a18d00b3e59b03f9e085c2b0fd825 
	I1114 15:59:58.442217  876065 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 15:59:58.442251  876065 cni.go:84] Creating CNI manager for ""
	I1114 15:59:58.442263  876065 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 15:59:58.444078  876065 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 15:59:58.445560  876065 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 15:59:58.467849  876065 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
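The 457-byte file copied above is minikube's bridge CNI configuration for this node; its exact contents are not echoed in the log, but a conflist of this kind typically pairs the bridge plugin (with host-local IPAM for the pod subnet) with a portmap plugin. To read the real file one could reuse the same SSH client details the log sets up for this machine later (IP, key path, and docker user from the sshutil line at 16:00:11.360379):

	ssh -i /home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa \
	  docker@192.168.50.251 sudo cat /etc/cni/net.d/1-k8s.conflist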
	I1114 15:59:58.501795  876065 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 15:59:58.501941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.501965  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa minikube.k8s.io/name=no-preload-490998 minikube.k8s.io/updated_at=2023_11_14T15_59_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.557314  876065 ops.go:34] apiserver oom_adj: -16
	I1114 15:59:58.891105  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:59.006867  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 15:59:58.811870  876396 system_pods.go:86] 4 kube-system pods found
	I1114 15:59:58.811905  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 15:59:58.811912  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 15:59:58.811920  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 15:59:58.811924  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 15:59:58.811939  876396 retry.go:31] will retry after 2.166453413s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:00.984597  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:00.984626  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:00.984632  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:00.984638  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:00.984643  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:00.984661  876396 retry.go:31] will retry after 2.339496963s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 15:59:59.620843  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.120941  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:00.621244  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.121507  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:01.621512  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.121367  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:02.621449  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.120920  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.620857  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:03.329034  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:03.329061  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:03.329067  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:03.329074  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:03.329078  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:03.329097  876396 retry.go:31] will retry after 3.593700907s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:06.929268  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:06.929308  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:06.929316  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:06.929327  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:06.929335  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:06.929357  876396 retry.go:31] will retry after 4.929780079s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:04.121245  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:04.620976  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.120894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:05.621609  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.121209  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:06.621322  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.121613  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:07.620968  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.121482  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:08.621166  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.121032  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:09.620894  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.120992  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:10.621306  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.121427  876065 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 16:00:11.299388  876065 kubeadm.go:1081] duration metric: took 12.79751335s to wait for elevateKubeSystemPrivileges.
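The block of repeated "kubectl get sa default" runs above is minikube polling until the default ServiceAccount exists (it is created asynchronously by the controller-manager) before declaring cluster bring-up complete. Hand-rolled on the node, where ssh_runner executes these commands, the same wait is roughly:

	# illustrative equivalent of the polling loop shown in the log
	until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done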
	I1114 16:00:11.299429  876065 kubeadm.go:406] StartCluster complete in 5m8.910317864s
	I1114 16:00:11.299489  876065 settings.go:142] acquiring lock: {Name:mk1f5098908f9ccaec1520c4cf8fe52dd7d73625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.299594  876065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 16:00:11.301841  876065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/kubeconfig: {Name:mkf7ada9065961c7295407bcd5245c67177c7015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 16:00:11.302097  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 16:00:11.302144  876065 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 16:00:11.302251  876065 addons.go:69] Setting storage-provisioner=true in profile "no-preload-490998"
	I1114 16:00:11.302268  876065 addons.go:69] Setting default-storageclass=true in profile "no-preload-490998"
	I1114 16:00:11.302287  876065 addons.go:231] Setting addon storage-provisioner=true in "no-preload-490998"
	W1114 16:00:11.302301  876065 addons.go:240] addon storage-provisioner should already be in state true
	I1114 16:00:11.302296  876065 addons.go:69] Setting metrics-server=true in profile "no-preload-490998"
	I1114 16:00:11.302327  876065 config.go:182] Loaded profile config "no-preload-490998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 16:00:11.302346  876065 addons.go:231] Setting addon metrics-server=true in "no-preload-490998"
	W1114 16:00:11.302360  876065 addons.go:240] addon metrics-server should already be in state true
	I1114 16:00:11.302361  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302408  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.302287  876065 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-490998"
	I1114 16:00:11.302858  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302926  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.302942  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302956  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.302863  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.303043  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.323089  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1114 16:00:11.323101  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1114 16:00:11.323750  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.323807  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.324339  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324362  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324554  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.324577  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.324806  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I1114 16:00:11.325059  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325120  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.325172  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.325617  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.325652  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326120  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.326138  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.326359  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.326398  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.326499  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.326665  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.330090  876065 addons.go:231] Setting addon default-storageclass=true in "no-preload-490998"
	W1114 16:00:11.330115  876065 addons.go:240] addon default-storageclass should already be in state true
	I1114 16:00:11.330144  876065 host.go:66] Checking if "no-preload-490998" exists ...
	I1114 16:00:11.330381  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.330415  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.347198  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1114 16:00:11.347385  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I1114 16:00:11.347562  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I1114 16:00:11.347721  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347785  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.347897  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.348216  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348232  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348346  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348358  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.348366  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348370  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.348593  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348729  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348878  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.348947  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349143  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.349223  876065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 16:00:11.349270  876065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 16:00:11.351308  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.353786  876065 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 16:00:11.352409  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.355097  876065 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.355119  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 16:00:11.355141  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.356613  876065 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 16:00:11.357928  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 16:00:11.357949  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 16:00:11.357969  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.358548  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359421  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.359450  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.359652  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.359922  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.360221  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.360379  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.362075  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362508  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.362532  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.362831  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.363041  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.363234  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.363390  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.379820  876065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I1114 16:00:11.380297  876065 main.go:141] libmachine: () Calling .GetVersion
	I1114 16:00:11.380905  876065 main.go:141] libmachine: Using API Version  1
	I1114 16:00:11.380935  876065 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 16:00:11.381326  876065 main.go:141] libmachine: () Calling .GetMachineName
	I1114 16:00:11.381573  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetState
	I1114 16:00:11.383433  876065 main.go:141] libmachine: (no-preload-490998) Calling .DriverName
	I1114 16:00:11.383722  876065 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.383741  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 16:00:11.383762  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHHostname
	I1114 16:00:11.386432  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.386813  876065 main.go:141] libmachine: (no-preload-490998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:48:fe", ip: ""} in network mk-no-preload-490998: {Iface:virbr2 ExpiryTime:2023-11-14 16:44:44 +0000 UTC Type:0 Mac:52:54:00:78:48:fe Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:no-preload-490998 Clientid:01:52:54:00:78:48:fe}
	I1114 16:00:11.386845  876065 main.go:141] libmachine: (no-preload-490998) DBG | domain no-preload-490998 has defined IP address 192.168.50.251 and MAC address 52:54:00:78:48:fe in network mk-no-preload-490998
	I1114 16:00:11.387062  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHPort
	I1114 16:00:11.387311  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHKeyPath
	I1114 16:00:11.387490  876065 main.go:141] libmachine: (no-preload-490998) Calling .GetSSHUsername
	I1114 16:00:11.387661  876065 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/no-preload-490998/id_rsa Username:docker}
	I1114 16:00:11.450418  876065 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-490998" context rescaled to 1 replicas
	I1114 16:00:11.450472  876065 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 16:00:11.452499  876065 out.go:177] * Verifying Kubernetes components...
	I1114 16:00:11.864833  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:11.864867  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:11.864875  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:11.864884  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:11.864891  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:11.864918  876396 retry.go:31] will retry after 6.141765036s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:11.454141  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:11.560863  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 16:00:11.582400  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 16:00:11.582423  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 16:00:11.596910  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 16:00:11.626625  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 16:00:11.626652  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 16:00:11.634166  876065 node_ready.go:35] waiting up to 6m0s for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.634309  876065 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 16:00:11.706391  876065 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.706421  876065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 16:00:11.737914  876065 node_ready.go:49] node "no-preload-490998" has status "Ready":"True"
	I1114 16:00:11.737955  876065 node_ready.go:38] duration metric: took 103.74965ms waiting for node "no-preload-490998" to be "Ready" ...
	I1114 16:00:11.737969  876065 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:11.795522  876065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 16:00:11.910850  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:13.838426  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.277507449s)
	I1114 16:00:13.838488  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838481  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.241527225s)
	I1114 16:00:13.838530  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.838555  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838501  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.838599  876065 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.204200469s)
	I1114 16:00:13.838636  876065 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1114 16:00:13.838941  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.838992  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839001  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839008  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839016  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.839032  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.839047  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.839057  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.839066  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841298  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.841315  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841335  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.841398  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.841418  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855083  876065 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.059516605s)
	I1114 16:00:13.855146  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855169  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855524  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855572  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855588  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855600  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.855612  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.855921  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.855949  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.855961  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.855979  876065 addons.go:467] Verifying addon metrics-server=true in "no-preload-490998"
	I1114 16:00:13.864145  876065 main.go:141] libmachine: Making call to close driver server
	I1114 16:00:13.864168  876065 main.go:141] libmachine: (no-preload-490998) Calling .Close
	I1114 16:00:13.864444  876065 main.go:141] libmachine: (no-preload-490998) DBG | Closing plugin on server side
	I1114 16:00:13.864480  876065 main.go:141] libmachine: Successfully made call to close driver server
	I1114 16:00:13.864491  876065 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 16:00:13.867459  876065 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1114 16:00:13.868861  876065 addons.go:502] enable addons completed in 2.566733189s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1114 16:00:14.067240  876065 pod_ready.go:97] error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067289  876065 pod_ready.go:81] duration metric: took 2.15639988s waiting for pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace to be "Ready" ...
	E1114 16:00:14.067306  876065 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-55g9l" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-55g9l" not found
	I1114 16:00:14.067315  876065 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140385  876065 pod_ready.go:92] pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.140412  876065 pod_ready.go:81] duration metric: took 2.07308909s waiting for pod "coredns-5dd5756b68-khvq4" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.140422  876065 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145818  876065 pod_ready.go:92] pod "etcd-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.145837  876065 pod_ready.go:81] duration metric: took 5.409163ms waiting for pod "etcd-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.145845  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150850  876065 pod_ready.go:92] pod "kube-apiserver-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.150868  876065 pod_ready.go:81] duration metric: took 5.017013ms waiting for pod "kube-apiserver-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.150877  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155895  876065 pod_ready.go:92] pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.155919  876065 pod_ready.go:81] duration metric: took 5.034132ms waiting for pod "kube-controller-manager-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.155931  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254239  876065 pod_ready.go:92] pod "kube-proxy-9nc8j" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.254270  876065 pod_ready.go:81] duration metric: took 98.331009ms waiting for pod "kube-proxy-9nc8j" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.254282  876065 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653014  876065 pod_ready.go:92] pod "kube-scheduler-no-preload-490998" in "kube-system" namespace has status "Ready":"True"
	I1114 16:00:16.653041  876065 pod_ready.go:81] duration metric: took 398.751468ms waiting for pod "kube-scheduler-no-preload-490998" in "kube-system" namespace to be "Ready" ...
	I1114 16:00:16.653049  876065 pod_ready.go:38] duration metric: took 4.915065516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 16:00:16.653066  876065 api_server.go:52] waiting for apiserver process to appear ...
	I1114 16:00:16.653118  876065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 16:00:16.670396  876065 api_server.go:72] duration metric: took 5.219889322s to wait for apiserver process to appear ...
	I1114 16:00:16.670430  876065 api_server.go:88] waiting for apiserver healthz status ...
	I1114 16:00:16.670450  876065 api_server.go:253] Checking apiserver healthz at https://192.168.50.251:8443/healthz ...
	I1114 16:00:16.675936  876065 api_server.go:279] https://192.168.50.251:8443/healthz returned 200:
	ok
	I1114 16:00:16.677570  876065 api_server.go:141] control plane version: v1.28.3
	I1114 16:00:16.677592  876065 api_server.go:131] duration metric: took 7.155742ms to wait for apiserver health ...
	I1114 16:00:16.677601  876065 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 16:00:16.858468  876065 system_pods.go:59] 8 kube-system pods found
	I1114 16:00:16.858500  876065 system_pods.go:61] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:16.858505  876065 system_pods.go:61] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:16.858509  876065 system_pods.go:61] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:16.858514  876065 system_pods.go:61] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:16.858518  876065 system_pods.go:61] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:16.858522  876065 system_pods.go:61] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:16.858529  876065 system_pods.go:61] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:16.858534  876065 system_pods.go:61] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:16.858543  876065 system_pods.go:74] duration metric: took 180.935707ms to wait for pod list to return data ...
	I1114 16:00:16.858551  876065 default_sa.go:34] waiting for default service account to be created ...
	I1114 16:00:17.053423  876065 default_sa.go:45] found service account: "default"
	I1114 16:00:17.053478  876065 default_sa.go:55] duration metric: took 194.91891ms for default service account to be created ...
	I1114 16:00:17.053491  876065 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 16:00:17.256504  876065 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:17.256539  876065 system_pods.go:89] "coredns-5dd5756b68-khvq4" [c134d1c1-63e3-47a0-aa90-f8bf3ca66a3a] Running
	I1114 16:00:17.256547  876065 system_pods.go:89] "etcd-no-preload-490998" [80461598-992c-4af1-a7b2-91b04419a67a] Running
	I1114 16:00:17.256554  876065 system_pods.go:89] "kube-apiserver-no-preload-490998" [3d8c712b-0ad0-44bb-a50a-4b4f879bd5ae] Running
	I1114 16:00:17.256561  876065 system_pods.go:89] "kube-controller-manager-no-preload-490998" [ac08f4b8-b8de-4f12-a337-9adc33b5d64b] Running
	I1114 16:00:17.256567  876065 system_pods.go:89] "kube-proxy-9nc8j" [0d0395ac-2e00-4cfe-b9a4-f98fa63a9fc6] Running
	I1114 16:00:17.256572  876065 system_pods.go:89] "kube-scheduler-no-preload-490998" [d1e78584-826c-4ba9-8d8b-aa545993ad26] Running
	I1114 16:00:17.256582  876065 system_pods.go:89] "metrics-server-57f55c9bc5-cljst" [3e8d5772-4204-44cb-9e85-41081d8a6510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:17.256589  876065 system_pods.go:89] "storage-provisioner" [a23261de-849c-41b5-9e5f-7230461b67d8] Running
	I1114 16:00:17.256602  876065 system_pods.go:126] duration metric: took 203.104027ms to wait for k8s-apps to be running ...
	I1114 16:00:17.256615  876065 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:17.256682  876065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:17.273098  876065 system_svc.go:56] duration metric: took 16.455935ms WaitForService to wait for kubelet.
	I1114 16:00:17.273135  876065 kubeadm.go:581] duration metric: took 5.822636312s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:17.273162  876065 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:17.453601  876065 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:17.453635  876065 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:17.453675  876065 node_conditions.go:105] duration metric: took 180.505934ms to run NodePressure ...
	I1114 16:00:17.453692  876065 start.go:228] waiting for startup goroutines ...
	I1114 16:00:17.453706  876065 start.go:233] waiting for cluster config update ...
	I1114 16:00:17.453748  876065 start.go:242] writing updated cluster config ...
	I1114 16:00:17.454022  876065 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:17.505999  876065 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 16:00:17.509514  876065 out.go:177] * Done! kubectl is now configured to use "no-preload-490998" cluster and "default" namespace by default
	I1114 16:00:18.012940  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:18.012980  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:18.012988  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:18.012998  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:18.013007  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:18.013032  876396 retry.go:31] will retry after 7.087138718s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:25.105773  876396 system_pods.go:86] 4 kube-system pods found
	I1114 16:00:25.105804  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:25.105809  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:25.105817  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:25.105822  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:25.105842  876396 retry.go:31] will retry after 8.539395127s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 16:00:33.651084  876396 system_pods.go:86] 6 kube-system pods found
	I1114 16:00:33.651116  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:33.651121  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:33.651125  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:33.651129  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:33.651136  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:33.651141  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:33.651159  876396 retry.go:31] will retry after 10.428154724s: missing components: etcd, kube-apiserver
	I1114 16:00:44.086463  876396 system_pods.go:86] 7 kube-system pods found
	I1114 16:00:44.086496  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:44.086501  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:44.086506  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:44.086511  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:44.086515  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:44.086522  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:44.086527  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:44.086546  876396 retry.go:31] will retry after 10.535877375s: missing components: kube-apiserver
	I1114 16:00:54.631194  876396 system_pods.go:86] 8 kube-system pods found
	I1114 16:00:54.631230  876396 system_pods.go:89] "coredns-5644d7b6d9-8855d" [76d136f9-de29-41cf-8df1-fdcbedcc30e6] Running
	I1114 16:00:54.631237  876396 system_pods.go:89] "etcd-old-k8s-version-842105" [2caa785f-8d7f-4aa3-9a1a-3ca332b04bcc] Running
	I1114 16:00:54.631244  876396 system_pods.go:89] "kube-apiserver-old-k8s-version-842105" [3035c074-63ca-4b23-a375-415210397d17] Running
	I1114 16:00:54.631252  876396 system_pods.go:89] "kube-controller-manager-old-k8s-version-842105" [fc8d94bd-091b-40a8-8162-4869ca3d3b65] Running
	I1114 16:00:54.631259  876396 system_pods.go:89] "kube-proxy-g86p9" [0afa19fc-9d8c-4ca9-9a51-2f7d13661718] Running
	I1114 16:00:54.631265  876396 system_pods.go:89] "kube-scheduler-old-k8s-version-842105" [dc2397b7-99d2-4d9f-9f19-22468ad9e1f8] Running
	I1114 16:00:54.631275  876396 system_pods.go:89] "metrics-server-74d5856cc6-8cxxt" [87326c72-11c7-4a38-9980-ca2ae63cf2e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 16:00:54.631291  876396 system_pods.go:89] "storage-provisioner" [a99f6a36-1296-455c-bb51-eaeb68fba6c5] Running
	I1114 16:00:54.631304  876396 system_pods.go:126] duration metric: took 1m4.854946282s to wait for k8s-apps to be running ...
	I1114 16:00:54.631317  876396 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 16:00:54.631470  876396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 16:00:54.648616  876396 system_svc.go:56] duration metric: took 17.286024ms WaitForService to wait for kubelet.
	I1114 16:00:54.648650  876396 kubeadm.go:581] duration metric: took 1m6.071350783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 16:00:54.648677  876396 node_conditions.go:102] verifying NodePressure condition ...
	I1114 16:00:54.652020  876396 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 16:00:54.652055  876396 node_conditions.go:123] node cpu capacity is 2
	I1114 16:00:54.652069  876396 node_conditions.go:105] duration metric: took 3.385579ms to run NodePressure ...
	I1114 16:00:54.652085  876396 start.go:228] waiting for startup goroutines ...
	I1114 16:00:54.652093  876396 start.go:233] waiting for cluster config update ...
	I1114 16:00:54.652106  876396 start.go:242] writing updated cluster config ...
	I1114 16:00:54.652418  876396 ssh_runner.go:195] Run: rm -f paused
	I1114 16:00:54.706394  876396 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1114 16:00:54.708374  876396 out.go:177] 
	W1114 16:00:54.709776  876396 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1114 16:00:54.711177  876396 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1114 16:00:54.712775  876396 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-842105" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-14 15:53:47 UTC, ends at Tue 2023-11-14 16:13:55 UTC. --
	Nov 14 16:13:54 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:54.995197196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978434995178493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=1e42df3e-4775-489c-89f5-d9a636118660 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:54 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:54.996092356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ebcdadc2-c54c-4d22-8668-991175ef7b9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:54 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:54.996188748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ebcdadc2-c54c-4d22-8668-991175ef7b9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:54 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:54.996352398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ebcdadc2-c54c-4d22-8668-991175ef7b9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.034495392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f3ae0307-6779-4eb5-9e5d-cb4070bdf258 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.034586487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f3ae0307-6779-4eb5-9e5d-cb4070bdf258 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.036157670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3c7dab64-c692-4380-adeb-3fd3514a94af name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.036921224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978435036903206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=3c7dab64-c692-4380-adeb-3fd3514a94af name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.038524801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5765352e-3b47-4f46-b388-f2232f28b180 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.038602326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5765352e-3b47-4f46-b388-f2232f28b180 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.038838834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5765352e-3b47-4f46-b388-f2232f28b180 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.079700805Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51e01528-028d-4ba5-9680-7ad8ee89fdb1 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.079816987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51e01528-028d-4ba5-9680-7ad8ee89fdb1 name=/runtime.v1.RuntimeService/Version
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.081311869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=905fdb2c-63ee-43eb-b92d-af5b8eb88ac8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.082102524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978435082085999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=905fdb2c-63ee-43eb-b92d-af5b8eb88ac8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.083143604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d89b5b17-defc-48f0-a5ce-187a8e708cb6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.083200798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d89b5b17-defc-48f0-a5ce-187a8e708cb6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.083411548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d89b5b17-defc-48f0-a5ce-187a8e708cb6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.124839706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3acf41f9-ffb5-45c4-bf25-2528f320fb7d name=/runtime.v1.RuntimeService/Version
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.124948046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3acf41f9-ffb5-45c4-bf25-2528f320fb7d name=/runtime.v1.RuntimeService/Version
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.126538474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3e1eab7e-0e54-416d-8455-4e94a6d4d398 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.127274701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699978435127252751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=3e1eab7e-0e54-416d-8455-4e94a6d4d398 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.128050722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=df6833cb-5f92-4d46-bd88-2908ee566afd name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.128157836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=df6833cb-5f92-4d46-bd88-2908ee566afd name=/runtime.v1.RuntimeService/ListContainers
	Nov 14 16:13:55 old-k8s-version-842105 crio[733]: time="2023-11-14 16:13:55.128447997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382,PodSandboxId:ca10a67a4f78c654ce8b0ed74a4d3be2b88936c62ca8fcb278d572cdf8b873ca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699977591330594748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a99f6a36-1296-455c-bb51-eaeb68fba6c5,},Annotations:map[string]string{io.kubernetes.container.hash: f5d640d2,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439,PodSandboxId:8264d893bf6a18de0d6ff816446cec8ed3e1aaa2bbc520a9abd2f31a31768c74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699977591120429409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g86p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afa19fc-9d8c-4ca9-9a51-2f7d13661718,},Annotations:map[string]string{io.kubernetes.container.hash: cf7228c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e,PodSandboxId:07a302208d0d1810425825fcbe69c2fb455033b5877caccade7f7647e66129f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699977590249897630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8855d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d136f9-de29-41cf-8df1-fdcbedcc30e6,},Annotations:map[string]string{io.kubernetes.container.hash: daf40a87,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482,PodSandboxId:a5cbf7428fdebf74c2cf379b86ed423af45bb0fcd7531fdbc284c2c35c764dbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699977565038251759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f38130cc1f016f57dfa3cf4c3bae58,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4a07c132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc,PodSandboxId:649683159c97987570092438bd834ab4022b7ec9db3f668dd54b72bd7b991ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699977564053580723,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d,PodSandboxId:861602a1268ed7d92311f2f94d2933377965991a964bdf4f81835586f6dd4779,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699977563726163838,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d130c9db229ff9707d2469539a210852,},Annotations:map[string]string{io.kubern
etes.container.hash: cd420de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31,PodSandboxId:b05616eb29a666bf26433d7cc1078f4b26b2287dab4faf96fedfec9d8fde5942,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699977563684042593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-842105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=df6833cb-5f92-4d46-bd88-2908ee566afd name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e4eb788285cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   ca10a67a4f78c       storage-provisioner
	797c248b411c1       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   14 minutes ago      Running             kube-proxy                0                   8264d893bf6a1       kube-proxy-g86p9
	c5e946600b40d       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   07a302208d0d1       coredns-5644d7b6d9-8855d
	7c9486a8d7813       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   a5cbf7428fdeb       etcd-old-k8s-version-842105
	3cf9dfdbec257       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   649683159c979       kube-scheduler-old-k8s-version-842105
	bf630d3e93819       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            0                   861602a1268ed       kube-apiserver-old-k8s-version-842105
	19215628874ec       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   b05616eb29a66       kube-controller-manager-old-k8s-version-842105
	
	* 
	* ==> coredns [c5e946600b40d83ada81750a145d139d53d6a967cc6a92882e18d18fe6e3814e] <==
	* .:53
	2023-11-14T15:59:50.679Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-14T15:59:50.679Z [INFO] CoreDNS-1.6.2
	2023-11-14T15:59:50.679Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-14T16:00:16.986Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2023-11-14T16:00:16.996Z [INFO] 127.0.0.1:43774 - 49059 "HINFO IN 4264295416034289763.7366177133355996784. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010283424s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-842105
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-842105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78e88c1ed3dbb33f4e8e0a9f1609d339aca8b3fa
	                    minikube.k8s.io/name=old-k8s-version-842105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T15_59_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 15:59:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 16:13:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 16:13:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 16:13:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 16:13:29 +0000   Tue, 14 Nov 2023 15:59:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.151
	  Hostname:    old-k8s-version-842105
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 02cec9847ade4e5f882c0d8ba9945a51
	 System UUID:                02cec984-7ade-4e5f-882c-0d8ba9945a51
	 Boot ID:                    c641e42a-9e20-4877-8a02-69dac1e980b3
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-8855d                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                etcd-old-k8s-version-842105                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-842105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-842105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-g86p9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                kube-scheduler-old-k8s-version-842105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                metrics-server-74d5856cc6-8cxxt                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-842105     Node old-k8s-version-842105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x7 over 14m)  kubelet, old-k8s-version-842105     Node old-k8s-version-842105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x8 over 14m)  kubelet, old-k8s-version-842105     Node old-k8s-version-842105 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy, old-k8s-version-842105  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov14 15:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074304] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.516011] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.467833] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149007] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.411302] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.821500] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.102389] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.154631] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.110901] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +0.214191] systemd-fstab-generator[718]: Ignoring "noauto" for root device
	[Nov14 15:54] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.458820] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.996845] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.356906] hrtimer: interrupt took 10215455 ns
	[  +0.379719] kauditd_printk_skb: 5 callbacks suppressed
	[Nov14 15:59] systemd-fstab-generator[3223]: Ignoring "noauto" for root device
	[  +1.263041] kauditd_printk_skb: 8 callbacks suppressed
	[ +36.187593] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [7c9486a8d7813ffb5242cda8b1ab3437d59d56fd519ae7d30a5ee37121fc6482] <==
	* 2023-11-14 15:59:25.181065 I | raft: cec33aa8f0724833 became follower at term 0
	2023-11-14 15:59:25.181090 I | raft: newRaft cec33aa8f0724833 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-14 15:59:25.181109 I | raft: cec33aa8f0724833 became follower at term 1
	2023-11-14 15:59:25.190204 W | auth: simple token is not cryptographically signed
	2023-11-14 15:59:25.195821 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-14 15:59:25.197979 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 15:59:25.198123 I | embed: listening for metrics on http://192.168.72.151:2381
	2023-11-14 15:59:25.198379 I | etcdserver: cec33aa8f0724833 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-14 15:59:25.198987 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-14 15:59:25.199281 I | etcdserver/membership: added member cec33aa8f0724833 [https://192.168.72.151:2380] to cluster 31c137043c99215d
	2023-11-14 15:59:25.381606 I | raft: cec33aa8f0724833 is starting a new election at term 1
	2023-11-14 15:59:25.381790 I | raft: cec33aa8f0724833 became candidate at term 2
	2023-11-14 15:59:25.381818 I | raft: cec33aa8f0724833 received MsgVoteResp from cec33aa8f0724833 at term 2
	2023-11-14 15:59:25.381840 I | raft: cec33aa8f0724833 became leader at term 2
	2023-11-14 15:59:25.381857 I | raft: raft.node: cec33aa8f0724833 elected leader cec33aa8f0724833 at term 2
	2023-11-14 15:59:25.382351 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-14 15:59:25.382728 I | etcdserver: published {Name:old-k8s-version-842105 ClientURLs:[https://192.168.72.151:2379]} to cluster 31c137043c99215d
	2023-11-14 15:59:25.382892 I | embed: ready to serve client requests
	2023-11-14 15:59:25.383941 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-14 15:59:25.384024 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-14 15:59:25.384050 I | embed: ready to serve client requests
	2023-11-14 15:59:25.385102 I | embed: serving client requests on 192.168.72.151:2379
	2023-11-14 15:59:25.390914 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-14 16:09:25.516830 I | mvcc: store.index: compact 650
	2023-11-14 16:09:25.519750 I | mvcc: finished scheduled compaction at 650 (took 2.231208ms)
	
	* 
	* ==> kernel <==
	*  16:13:55 up 20 min,  0 users,  load average: 0.31, 0.32, 0.24
	Linux old-k8s-version-842105 5.10.57 #1 SMP Thu Nov 9 03:58:23 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bf630d3e93819da559f114cefb1547739579c6b22317c7916f6bb2c044b4044d] <==
	* I1114 16:05:29.754736       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:05:29.754866       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:05:29.754972       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:05:29.755012       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:07:29.755552       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:07:29.755731       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:07:29.755820       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:07:29.755833       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:09:29.757205       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:09:29.757584       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:09:29.757740       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:09:29.757775       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:10:29.758301       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:10:29.758693       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:10:29.758822       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:10:29.758857       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 16:12:29.759302       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 16:12:29.759462       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 16:12:29.759561       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 16:12:29.759577       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [19215628874ec9c6bb81193cb0a376e5631fcb35828e2ad9329bffab86020f31] <==
	* E1114 16:07:23.282836       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:07:48.843452       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:07:53.534905       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:08:20.845819       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:08:23.787155       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:08:52.848065       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:08:54.039845       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1114 16:09:24.291484       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:09:24.850142       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:09:54.543249       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:09:56.852297       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:10:24.795929       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:10:28.855025       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:10:55.049221       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:11:00.857539       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:11:25.301048       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:11:32.860796       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:11:55.553348       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:12:04.862555       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:12:25.805586       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:12:36.865098       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:12:56.058107       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:13:08.867202       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 16:13:26.310083       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 16:13:40.869130       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [797c248b411c1be9cc353a5fa29f1d0b2960eab030cfab36964d531f24005439] <==
	* W1114 15:59:51.397331       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1114 15:59:51.413250       1 node.go:135] Successfully retrieved node IP: 192.168.72.151
	I1114 15:59:51.413413       1 server_others.go:149] Using iptables Proxier.
	I1114 15:59:51.415005       1 server.go:529] Version: v1.16.0
	I1114 15:59:51.419594       1 config.go:313] Starting service config controller
	I1114 15:59:51.420190       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1114 15:59:51.420352       1 config.go:131] Starting endpoints config controller
	I1114 15:59:51.420367       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1114 15:59:51.520970       1 shared_informer.go:204] Caches are synced for service config 
	I1114 15:59:51.521309       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [3cf9dfdbec25720ba80b95b8f1c7b03276509e94590b7ab8be797d40a8a1f2cc] <==
	* W1114 15:59:28.757576       1 authentication.go:79] Authentication is disabled
	I1114 15:59:28.757750       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1114 15:59:28.759590       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1114 15:59:28.809845       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:59:28.809967       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 15:59:28.810039       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:59:28.810102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:59:28.810166       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:28.810196       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 15:59:28.815733       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:59:28.815921       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:59:28.816015       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:28.816149       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:59:28.824882       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:29.811494       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 15:59:29.818048       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 15:59:29.819553       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 15:59:29.820520       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 15:59:29.827024       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 15:59:29.830465       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 15:59:29.831577       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 15:59:29.832464       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 15:59:29.835132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 15:59:29.836907       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 15:59:29.838101       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 15:53:47 UTC, ends at Tue 2023-11-14 16:13:55 UTC. --
	Nov 14 16:09:22 old-k8s-version-842105 kubelet[3241]: E1114 16:09:22.735981    3241 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 14 16:09:32 old-k8s-version-842105 kubelet[3241]: E1114 16:09:32.656311    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:09:44 old-k8s-version-842105 kubelet[3241]: E1114 16:09:44.656781    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:09:59 old-k8s-version-842105 kubelet[3241]: E1114 16:09:59.656013    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:10:10 old-k8s-version-842105 kubelet[3241]: E1114 16:10:10.657773    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:10:23 old-k8s-version-842105 kubelet[3241]: E1114 16:10:23.656452    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:10:38 old-k8s-version-842105 kubelet[3241]: E1114 16:10:38.675836    3241 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:10:38 old-k8s-version-842105 kubelet[3241]: E1114 16:10:38.675907    3241 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:10:38 old-k8s-version-842105 kubelet[3241]: E1114 16:10:38.675956    3241 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 14 16:10:38 old-k8s-version-842105 kubelet[3241]: E1114 16:10:38.675981    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 14 16:10:52 old-k8s-version-842105 kubelet[3241]: E1114 16:10:52.658713    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:11:03 old-k8s-version-842105 kubelet[3241]: E1114 16:11:03.656025    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:11:18 old-k8s-version-842105 kubelet[3241]: E1114 16:11:18.656826    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:11:29 old-k8s-version-842105 kubelet[3241]: E1114 16:11:29.656862    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:11:40 old-k8s-version-842105 kubelet[3241]: E1114 16:11:40.656381    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:11:54 old-k8s-version-842105 kubelet[3241]: E1114 16:11:54.656405    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:12:07 old-k8s-version-842105 kubelet[3241]: E1114 16:12:07.657121    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:12:19 old-k8s-version-842105 kubelet[3241]: E1114 16:12:19.656739    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:12:34 old-k8s-version-842105 kubelet[3241]: E1114 16:12:34.656098    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:12:45 old-k8s-version-842105 kubelet[3241]: E1114 16:12:45.656595    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:12:59 old-k8s-version-842105 kubelet[3241]: E1114 16:12:59.656050    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:13:14 old-k8s-version-842105 kubelet[3241]: E1114 16:13:14.657226    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:13:27 old-k8s-version-842105 kubelet[3241]: E1114 16:13:27.656531    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:13:39 old-k8s-version-842105 kubelet[3241]: E1114 16:13:39.656704    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 16:13:51 old-k8s-version-842105 kubelet[3241]: E1114 16:13:51.656361    3241 pod_workers.go:191] Error syncing pod 87326c72-11c7-4a38-9980-ca2ae63cf2e6 ("metrics-server-74d5856cc6-8cxxt_kube-system(87326c72-11c7-4a38-9980-ca2ae63cf2e6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [6e4eb788285cfa7a236710229413e8dddf1329ebc075cb53e0feecf405a0c382] <==
	* I1114 15:59:51.505425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 15:59:51.516585       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 15:59:51.516917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 15:59:51.529153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 15:59:51.529367       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842105_dc661fd8-34fe-46bb-bc2d-b1a1df28b409!
	I1114 15:59:51.530556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"401edbd4-27d4-4297-b70e-a42b51e34980", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-842105_dc661fd8-34fe-46bb-bc2d-b1a1df28b409 became leader
	I1114 15:59:51.630807       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842105_dc661fd8-34fe-46bb-bc2d-b1a1df28b409!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-842105 -n old-k8s-version-842105
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-842105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-8cxxt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-842105 describe pod metrics-server-74d5856cc6-8cxxt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-842105 describe pod metrics-server-74d5856cc6-8cxxt: exit status 1 (69.631195ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-8cxxt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-842105 describe pod metrics-server-74d5856cc6-8cxxt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (238.56s)

                                                
                                    

Test pass (228/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.12
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 5.1
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.6
20 TestOffline 109.41
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 145.57
29 TestAddons/parallel/InspektorGadget 10.88
30 TestAddons/parallel/MetricsServer 7.12
31 TestAddons/parallel/HelmTiller 12.56
33 TestAddons/parallel/CSI 48.24
34 TestAddons/parallel/Headlamp 19.49
35 TestAddons/parallel/CloudSpanner 5.69
36 TestAddons/parallel/LocalPath 58.58
37 TestAddons/parallel/NvidiaDevicePlugin 5.88
40 TestAddons/serial/GCPAuth/Namespaces 0.13
42 TestCertOptions 47.55
43 TestCertExpiration 288.74
45 TestForceSystemdFlag 99.45
46 TestForceSystemdEnv 80.38
48 TestKVMDriverInstallOrUpdate 1.61
52 TestErrorSpam/setup 51.14
53 TestErrorSpam/start 0.4
54 TestErrorSpam/status 0.81
55 TestErrorSpam/pause 1.58
56 TestErrorSpam/unpause 1.75
57 TestErrorSpam/stop 2.27
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 99.04
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 47.28
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.16
69 TestFunctional/serial/CacheCmd/cache/add_local 1.06
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
77 TestFunctional/serial/ExtraConfig 35.6
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.46
80 TestFunctional/serial/LogsFileCmd 1.57
81 TestFunctional/serial/InvalidService 4.79
83 TestFunctional/parallel/ConfigCmd 0.47
84 TestFunctional/parallel/DashboardCmd 15.72
85 TestFunctional/parallel/DryRun 0.33
86 TestFunctional/parallel/InternationalLanguage 0.18
87 TestFunctional/parallel/StatusCmd 1.11
91 TestFunctional/parallel/ServiceCmdConnect 13.81
92 TestFunctional/parallel/AddonsCmd 0.22
93 TestFunctional/parallel/PersistentVolumeClaim 38.95
95 TestFunctional/parallel/SSHCmd 0.53
96 TestFunctional/parallel/CpCmd 1.11
97 TestFunctional/parallel/MySQL 25.39
98 TestFunctional/parallel/FileSync 0.25
99 TestFunctional/parallel/CertSync 1.62
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
107 TestFunctional/parallel/License 0.16
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
111 TestFunctional/parallel/MountCmd/any-port 23.25
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 0.95
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
127 TestFunctional/parallel/ImageCommands/ImageBuild 5.77
128 TestFunctional/parallel/ImageCommands/Setup 0.96
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 11.69
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.67
132 TestFunctional/parallel/MountCmd/specific-port 1.81
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.64
135 TestFunctional/parallel/ServiceCmd/DeployApp 13.32
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.14
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.4
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
140 TestFunctional/parallel/ProfileCmd/profile_list 0.3
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
142 TestFunctional/parallel/ServiceCmd/List 1.39
143 TestFunctional/parallel/ServiceCmd/JSONOutput 1.35
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
145 TestFunctional/parallel/ServiceCmd/Format 0.43
146 TestFunctional/parallel/ServiceCmd/URL 0.44
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.02
149 TestFunctional/delete_minikube_cached_images 0.01
153 TestIngressAddonLegacy/StartLegacyK8sCluster 79.7
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.46
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
160 TestJSONOutput/start/Command 101.46
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.72
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.66
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 7.12
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.23
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 100.22
192 TestMountStart/serial/StartWithMountFirst 27.9
193 TestMountStart/serial/VerifyMountFirst 0.41
194 TestMountStart/serial/StartWithMountSecond 27.59
195 TestMountStart/serial/VerifyMountSecond 0.41
196 TestMountStart/serial/DeleteFirst 0.72
197 TestMountStart/serial/VerifyMountPostDelete 0.41
198 TestMountStart/serial/Stop 1.21
199 TestMountStart/serial/RestartStopped 23.25
200 TestMountStart/serial/VerifyMountPostStop 0.41
203 TestMultiNode/serial/FreshStart2Nodes 108.75
204 TestMultiNode/serial/DeployApp2Nodes 5.46
206 TestMultiNode/serial/AddNode 41.96
207 TestMultiNode/serial/ProfileList 0.23
208 TestMultiNode/serial/CopyFile 7.95
209 TestMultiNode/serial/StopNode 3.02
210 TestMultiNode/serial/StartAfterStop 28.86
212 TestMultiNode/serial/DeleteNode 1.67
214 TestMultiNode/serial/RestartMultiNode 444.56
215 TestMultiNode/serial/ValidateNameConflict 49.52
222 TestScheduledStopUnix 121.07
228 TestKubernetesUpgrade 206.96
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
235 TestNoKubernetes/serial/StartWithK8s 112.63
240 TestNetworkPlugins/group/false 3.59
244 TestStoppedBinaryUpgrade/Setup 0.38
246 TestNoKubernetes/serial/StartWithStopK8s 9.31
247 TestNoKubernetes/serial/Start 28.59
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
249 TestNoKubernetes/serial/ProfileList 1.22
250 TestNoKubernetes/serial/Stop 1.23
251 TestNoKubernetes/serial/StartNoArgs 22.82
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
261 TestPause/serial/Start 115.43
262 TestNetworkPlugins/group/auto/Start 102.24
263 TestStoppedBinaryUpgrade/MinikubeLogs 0.43
264 TestNetworkPlugins/group/flannel/Start 75.61
266 TestNetworkPlugins/group/auto/KubeletFlags 0.28
267 TestNetworkPlugins/group/auto/NetCatPod 14.14
268 TestNetworkPlugins/group/auto/DNS 0.24
269 TestNetworkPlugins/group/auto/Localhost 0.2
270 TestNetworkPlugins/group/auto/HairPin 0.22
271 TestNetworkPlugins/group/enable-default-cni/Start 68.39
272 TestNetworkPlugins/group/bridge/Start 116.64
273 TestNetworkPlugins/group/flannel/ControllerPod 5.02
274 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
275 TestNetworkPlugins/group/flannel/NetCatPod 13.33
276 TestNetworkPlugins/group/flannel/DNS 0.19
277 TestNetworkPlugins/group/flannel/Localhost 0.17
278 TestNetworkPlugins/group/flannel/HairPin 0.16
279 TestNetworkPlugins/group/calico/Start 93.66
280 TestNetworkPlugins/group/kindnet/Start 94.77
281 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
282 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.35
283 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
284 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
285 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
286 TestNetworkPlugins/group/custom-flannel/Start 106.6
287 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
288 TestNetworkPlugins/group/bridge/NetCatPod 11.48
289 TestNetworkPlugins/group/bridge/DNS 0.17
290 TestNetworkPlugins/group/bridge/Localhost 0.16
291 TestNetworkPlugins/group/bridge/HairPin 0.15
292 TestNetworkPlugins/group/calico/ControllerPod 5.03
293 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
295 TestStartStop/group/old-k8s-version/serial/FirstStart 157.82
296 TestNetworkPlugins/group/calico/KubeletFlags 0.29
297 TestNetworkPlugins/group/calico/NetCatPod 13.47
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
299 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
300 TestNetworkPlugins/group/calico/DNS 0.24
301 TestNetworkPlugins/group/calico/Localhost 0.2
302 TestNetworkPlugins/group/calico/HairPin 0.17
303 TestNetworkPlugins/group/kindnet/DNS 0.19
304 TestNetworkPlugins/group/kindnet/Localhost 0.15
305 TestNetworkPlugins/group/kindnet/HairPin 0.15
307 TestStartStop/group/no-preload/serial/FirstStart 89.13
309 TestStartStop/group/embed-certs/serial/FirstStart 97
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
312 TestNetworkPlugins/group/custom-flannel/DNS 0.17
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 117.44
317 TestStartStop/group/no-preload/serial/DeployApp 9.53
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
320 TestStartStop/group/embed-certs/serial/DeployApp 9.5
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
323 TestStartStop/group/old-k8s-version/serial/DeployApp 7.45
324 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.43
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
331 TestStartStop/group/no-preload/serial/SecondStart 698.78
332 TestStartStop/group/embed-certs/serial/SecondStart 608.63
334 TestStartStop/group/old-k8s-version/serial/SecondStart 702.71
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 565.17
346 TestStartStop/group/newest-cni/serial/FirstStart 61.97
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.52
349 TestStartStop/group/newest-cni/serial/Stop 11.14
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
351 TestStartStop/group/newest-cni/serial/SecondStart 51.58
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
355 TestStartStop/group/newest-cni/serial/Pause 2.59
TestDownloadOnly/v1.16.0/json-events (10.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-430804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-430804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.117131587s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.12s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-430804
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-430804: exit status 85 (78.844648ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:38 UTC |          |
	|         | -p download-only-430804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:38:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:38:52.577460  832223 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:38:52.577742  832223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:38:52.577753  832223 out.go:309] Setting ErrFile to fd 2...
	I1114 14:38:52.577760  832223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:38:52.577955  832223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	W1114 14:38:52.578145  832223 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17598-824991/.minikube/config/config.json: open /home/jenkins/minikube-integration/17598-824991/.minikube/config/config.json: no such file or directory
	I1114 14:38:52.578781  832223 out.go:303] Setting JSON to true
	I1114 14:38:52.580324  832223 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":40885,"bootTime":1699931848,"procs":898,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:38:52.580390  832223 start.go:138] virtualization: kvm guest
	I1114 14:38:52.583067  832223 out.go:97] [download-only-430804] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:38:52.584796  832223 out.go:169] MINIKUBE_LOCATION=17598
	W1114 14:38:52.583180  832223 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball: no such file or directory
	I1114 14:38:52.583226  832223 notify.go:220] Checking for updates...
	I1114 14:38:52.587822  832223 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:38:52.589303  832223 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:38:52.590756  832223 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:38:52.592152  832223 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1114 14:38:52.594757  832223 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 14:38:52.595033  832223 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:38:52.626644  832223 out.go:97] Using the kvm2 driver based on user configuration
	I1114 14:38:52.626680  832223 start.go:298] selected driver: kvm2
	I1114 14:38:52.626687  832223 start.go:902] validating driver "kvm2" against <nil>
	I1114 14:38:52.627111  832223 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:38:52.627212  832223 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 14:38:52.641651  832223 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 14:38:52.641745  832223 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 14:38:52.642250  832223 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1114 14:38:52.642390  832223 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1114 14:38:52.642444  832223 cni.go:84] Creating CNI manager for ""
	I1114 14:38:52.642461  832223 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:38:52.642479  832223 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1114 14:38:52.642488  832223 start_flags.go:323] config:
	{Name:download-only-430804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-430804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:38:52.642678  832223 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:38:52.644451  832223 out.go:97] Downloading VM boot image ...
	I1114 14:38:52.644506  832223 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/iso/amd64/minikube-v1.32.1-1699485311-17565-amd64.iso
	I1114 14:38:55.446893  832223 out.go:97] Starting control plane node download-only-430804 in cluster download-only-430804
	I1114 14:38:55.446928  832223 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 14:38:55.471094  832223 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1114 14:38:55.471129  832223 cache.go:56] Caching tarball of preloaded images
	I1114 14:38:55.471286  832223 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 14:38:55.473096  832223 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1114 14:38:55.473122  832223 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:38:55.495801  832223 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1114 14:38:58.635980  832223 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:38:58.636069  832223 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:38:59.543189  832223 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1114 14:38:59.543581  832223 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/download-only-430804/config.json ...
	I1114 14:38:59.543616  832223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/download-only-430804/config.json: {Name:mk285c47d2dca98bf7b3e6621f1f327f4cc4e08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:38:59.543783  832223 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 14:38:59.543943  832223 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-430804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
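For reference, the two commands this sub-test drives can be replayed by hand. This is only a condensed sketch of what the log above records, using the profile name this run happened to generate (any throwaway name works):

	# Cache the ISO, the v1.16.0 preload tarball and kubectl, without booting a node
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-430804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2

	# A --download-only profile has no control plane yet, so "logs" is expected to exit with status 85
	out/minikube-linux-amd64 logs -p download-only-430804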

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (5.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-430804 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-430804 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.098135185s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (5.10s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-430804
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-430804: exit status 85 (80.401104ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:38 UTC |          |
	|         | -p download-only-430804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-430804 | jenkins | v1.32.0 | 14 Nov 23 14:39 UTC |          |
	|         | -p download-only-430804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:39:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:39:02.781562  832280 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:39:02.781732  832280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:39:02.781743  832280 out.go:309] Setting ErrFile to fd 2...
	I1114 14:39:02.781751  832280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:39:02.781962  832280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	W1114 14:39:02.782102  832280 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17598-824991/.minikube/config/config.json: open /home/jenkins/minikube-integration/17598-824991/.minikube/config/config.json: no such file or directory
	I1114 14:39:02.782588  832280 out.go:303] Setting JSON to true
	I1114 14:39:02.784201  832280 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":40895,"bootTime":1699931848,"procs":894,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:39:02.784272  832280 start.go:138] virtualization: kvm guest
	I1114 14:39:02.786556  832280 out.go:97] [download-only-430804] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:39:02.788202  832280 out.go:169] MINIKUBE_LOCATION=17598
	I1114 14:39:02.786752  832280 notify.go:220] Checking for updates...
	I1114 14:39:02.791208  832280 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:39:02.792877  832280 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:39:02.794295  832280 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:39:02.795672  832280 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1114 14:39:02.798324  832280 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 14:39:02.798871  832280 config.go:182] Loaded profile config "download-only-430804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1114 14:39:02.798952  832280 start.go:810] api.Load failed for download-only-430804: filestore "download-only-430804": Docker machine "download-only-430804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1114 14:39:02.799052  832280 driver.go:378] Setting default libvirt URI to qemu:///system
	W1114 14:39:02.799100  832280 start.go:810] api.Load failed for download-only-430804: filestore "download-only-430804": Docker machine "download-only-430804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1114 14:39:02.831604  832280 out.go:97] Using the kvm2 driver based on existing profile
	I1114 14:39:02.831641  832280 start.go:298] selected driver: kvm2
	I1114 14:39:02.831648  832280 start.go:902] validating driver "kvm2" against &{Name:download-only-430804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-430804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:39:02.832089  832280 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:39:02.832169  832280 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17598-824991/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 14:39:02.846972  832280 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 14:39:02.847797  832280 cni.go:84] Creating CNI manager for ""
	I1114 14:39:02.847816  832280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1114 14:39:02.847832  832280 start_flags.go:323] config:
	{Name:download-only-430804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-430804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:39:02.847977  832280 iso.go:125] acquiring lock: {Name:mk450778e1e8173ee0c207823f7c52a2b8554098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:39:02.849689  832280 out.go:97] Starting control plane node download-only-430804 in cluster download-only-430804
	I1114 14:39:02.849711  832280 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:02.872125  832280 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 14:39:02.872162  832280 cache.go:56] Caching tarball of preloaded images
	I1114 14:39:02.872312  832280 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:02.874269  832280 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1114 14:39:02.874288  832280 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:39:02.898700  832280 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1114 14:39:06.275684  832280 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:39:06.275821  832280 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17598-824991/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1114 14:39:07.211412  832280 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 14:39:07.211601  832280 profile.go:148] Saving config to /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/download-only-430804/config.json ...
	I1114 14:39:07.211855  832280 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:39:07.212076  832280 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17598-824991/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-430804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-430804
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-886653 --alsologtostderr --binary-mirror http://127.0.0.1:44247 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-886653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-886653
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (109.41s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-142254 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-142254 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m48.323157229s)
helpers_test.go:175: Cleaning up "offline-crio-142254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-142254
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-142254: (1.087545798s)
--- PASS: TestOffline (109.41s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-317784
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-317784: exit status 85 (67.0583ms)

                                                
                                                
-- stdout --
	* Profile "addons-317784" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-317784"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-317784
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-317784: exit status 85 (66.269163ms)

                                                
                                                
-- stdout --
	* Profile "addons-317784" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-317784"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (145.57s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-317784 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-317784 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.572146907s)
--- PASS: TestAddons/Setup (145.57s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jhtw6" [afeb4122-4e14-4945-b56a-2c9b08c47a5f] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013070103s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-317784
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-317784: (5.870462096s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.99717ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jkrcj" [cb043b53-5f93-4088-8ba6-93d4d706390a] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019570937s
addons_test.go:414: (dbg) Run:  kubectl --context addons-317784 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 addons disable metrics-server --alsologtostderr -v=1: (2.023998397s)
--- PASS: TestAddons/parallel/MetricsServer (7.12s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.56s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 24.809718ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-89dt8" [930fbb39-4b02-4205-8c93-f43026252d00] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.019007346s
addons_test.go:472: (dbg) Run:  kubectl --context addons-317784 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-317784 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.380834817s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 addons disable helm-tiller --alsologtostderr -v=1: (1.128494857s)
--- PASS: TestAddons/parallel/HelmTiller (12.56s)

                                                
                                    
TestAddons/parallel/CSI (48.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 6.633949ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5f804cf7-5ae0-48c4-ab40-76aeffd26877] Pending
helpers_test.go:344: "task-pv-pod" [5f804cf7-5ae0-48c4-ab40-76aeffd26877] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5f804cf7-5ae0-48c4-ab40-76aeffd26877] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.016989055s
addons_test.go:583: (dbg) Run:  kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-317784 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-317784 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-317784 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-317784 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-317784 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [53c9b4c7-a3e2-49a4-af10-efc96caa257e] Pending
helpers_test.go:344: "task-pv-pod-restore" [53c9b4c7-a3e2-49a4-af10-efc96caa257e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [53c9b4c7-a3e2-49a4-af10-efc96caa257e] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.019947807s
addons_test.go:625: (dbg) Run:  kubectl --context addons-317784 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-317784 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-317784 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.769483602s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.24s)
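For reference, the CSI exercise above reduces to the following kubectl sequence. This is only a condensed sketch of the commands recorded in the log; the manifests are minikube's testdata/csi-hostpath-driver files and the object names match this run:

	kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC "hpvc"
	kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod "task-pv-pod" consumes the PVC
	kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-317784 delete pod task-pv-pod
	kubectl --context addons-317784 delete pvc hpvc
	kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore"
	kubectl --context addons-317784 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"
	kubectl --context addons-317784 delete pod task-pv-pod-restore
	kubectl --context addons-317784 delete pvc hpvc-restore
	kubectl --context addons-317784 delete volumesnapshot new-snapshot-demo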

                                                
                                    
TestAddons/parallel/Headlamp (19.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-317784 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-317784 --alsologtostderr -v=1: (2.461559028s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-lx8bp" [f98e26b0-53b8-407a-9f98-712a0310b50a] Pending
helpers_test.go:344: "headlamp-777fd4b855-lx8bp" [f98e26b0-53b8-407a-9f98-712a0310b50a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-lx8bp" [f98e26b0-53b8-407a-9f98-712a0310b50a] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.024157315s
--- PASS: TestAddons/parallel/Headlamp (19.49s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-9phzq" [e170d39e-44a4-47f3-8d7a-c33c0ab80af7] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015263351s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-317784
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
TestAddons/parallel/LocalPath (58.58s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-317784 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-317784 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [048ea378-0095-4054-8a33-0e00d927fe77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [048ea378-0095-4054-8a33-0e00d927fe77] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [048ea378-0095-4054-8a33-0e00d927fe77] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.012825014s
addons_test.go:890: (dbg) Run:  kubectl --context addons-317784 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 ssh "cat /opt/local-path-provisioner/pvc-a752c059-4770-47b4-8afa-af875685de10_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-317784 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-317784 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-317784 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-317784 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.381900216s)
--- PASS: TestAddons/parallel/LocalPath (58.58s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.88s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q288v" [4201aa97-116f-4e49-ada3-ad15378da0e6] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.038242372s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-317784
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.88s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-317784 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-317784 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestCertOptions (47.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-700500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-700500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.026325016s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-700500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-700500 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-700500 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-700500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-700500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-700500: (1.039432809s)
--- PASS: TestCertOptions (47.55s)

                                                
                                    
TestCertExpiration (288.74s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-556230 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-556230 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.174562042s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-556230 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-556230 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.722027159s)
helpers_test.go:175: Cleaning up "cert-expiration-556230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-556230
--- PASS: TestCertExpiration (288.74s)
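Stripped of test plumbing, the two starts recorded above amount to the following sketch (profile name as generated for this run; the pause between them is presumably what lets the 3-minute certificates lapse before the second start has to cope with them):

	out/minikube-linux-amd64 start -p cert-expiration-556230 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	# ...wait for the short-lived certificates to expire...
	out/minikube-linux-amd64 start -p cert-expiration-556230 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio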

                                                
                                    
TestForceSystemdFlag (99.45s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-205042 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-205042 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.197687649s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-205042 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-205042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-205042
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-205042: (1.025130623s)
--- PASS: TestForceSystemdFlag (99.45s)

TestForceSystemdEnv (80.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-249271 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-249271 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.526995158s)
helpers_test.go:175: Cleaning up "force-systemd-env-249271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-249271
--- PASS: TestForceSystemdEnv (80.38s)

TestKVMDriverInstallOrUpdate (1.61s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.61s)

TestErrorSpam/setup (51.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-991585 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991585 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-991585 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991585 --driver=kvm2  --container-runtime=crio: (51.13725066s)
--- PASS: TestErrorSpam/setup (51.14s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (2.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 stop: (2.099524153s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991585 --log_dir /tmp/nospam-991585 stop
--- PASS: TestErrorSpam/stop (2.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17598-824991/.minikube/files/etc/test/nested/copy/832211/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (99.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593453 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-593453 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.03750882s)
--- PASS: TestFunctional/serial/StartWithProxy (99.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (47.28s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593453 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-593453 --alsologtostderr -v=8: (47.279914639s)
functional_test.go:659: soft start took 47.280702792s for "functional-593453" cluster.
--- PASS: TestFunctional/serial/SoftStart (47.28s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-593453 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 cache add registry.k8s.io/pause:3.3: (1.053967489s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 cache add registry.k8s.io/pause:latest: (1.171212986s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-593453 /tmp/TestFunctionalserialCacheCmdcacheadd_local1637158967/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cache add minikube-local-cache-test:functional-593453
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cache delete minikube-local-cache-test:functional-593453
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-593453
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (240.47716ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 kubectl -- --context functional-593453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-593453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (35.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-593453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.599100415s)
functional_test.go:757: restart took 35.599266286s for "functional-593453" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.60s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-593453 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 logs: (1.459483222s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.57s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 logs --file /tmp/TestFunctionalserialLogsFileCmd2594220531/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 logs --file /tmp/TestFunctionalserialLogsFileCmd2594220531/001/logs.txt: (1.571102221s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

TestFunctional/serial/InvalidService (4.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-593453 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-593453
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-593453: exit status 115 (308.963222ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.39:30302 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-593453 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-593453 delete -f testdata/invalidsvc.yaml: (1.159323405s)
--- PASS: TestFunctional/serial/InvalidService (4.79s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 config get cpus: exit status 14 (91.248304ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 config get cpus: exit status 14 (69.635541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (15.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-593453 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-593453 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 840017: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.72s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-593453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.969533ms)

-- stdout --
	* [functional-593453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1114 14:51:53.764623  839547 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:51:53.764913  839547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:51:53.764923  839547 out.go:309] Setting ErrFile to fd 2...
	I1114 14:51:53.764928  839547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:51:53.765115  839547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 14:51:53.765722  839547 out.go:303] Setting JSON to false
	I1114 14:51:53.766762  839547 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":41666,"bootTime":1699931848,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:51:53.766830  839547 start.go:138] virtualization: kvm guest
	I1114 14:51:53.769363  839547 out.go:177] * [functional-593453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:51:53.771004  839547 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 14:51:53.771052  839547 notify.go:220] Checking for updates...
	I1114 14:51:53.773871  839547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:51:53.775461  839547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:51:53.776989  839547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:51:53.778362  839547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 14:51:53.779746  839547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:51:53.781589  839547 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:51:53.781989  839547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:51:53.782072  839547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:51:53.798094  839547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I1114 14:51:53.798577  839547 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:51:53.799187  839547 main.go:141] libmachine: Using API Version  1
	I1114 14:51:53.799210  839547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:51:53.799642  839547 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:51:53.799826  839547 main.go:141] libmachine: (functional-593453) Calling .DriverName
	I1114 14:51:53.800106  839547 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:51:53.800414  839547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:51:53.800455  839547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:51:53.816436  839547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I1114 14:51:53.816957  839547 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:51:53.817484  839547 main.go:141] libmachine: Using API Version  1
	I1114 14:51:53.817508  839547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:51:53.817829  839547 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:51:53.818019  839547 main.go:141] libmachine: (functional-593453) Calling .DriverName
	I1114 14:51:53.853143  839547 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 14:51:53.854650  839547 start.go:298] selected driver: kvm2
	I1114 14:51:53.854674  839547 start.go:902] validating driver "kvm2" against &{Name:functional-593453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-593453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.39 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:51:53.854830  839547 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:51:53.857555  839547 out.go:177] 
	W1114 14:51:53.859070  839547 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1114 14:51:53.860406  839547 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593453 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-593453 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (175.548414ms)

-- stdout --
	* [functional-593453] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1114 14:51:53.085504  839362 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:51:53.085655  839362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:51:53.085667  839362 out.go:309] Setting ErrFile to fd 2...
	I1114 14:51:53.085672  839362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:51:53.085994  839362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 14:51:53.086547  839362 out.go:303] Setting JSON to false
	I1114 14:51:53.087786  839362 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":41665,"bootTime":1699931848,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:51:53.087856  839362 start.go:138] virtualization: kvm guest
	I1114 14:51:53.090213  839362 out.go:177] * [functional-593453] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1114 14:51:53.091775  839362 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 14:51:53.091780  839362 notify.go:220] Checking for updates...
	I1114 14:51:53.093347  839362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:51:53.094851  839362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 14:51:53.096171  839362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 14:51:53.097450  839362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 14:51:53.098656  839362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:51:53.101208  839362 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:51:53.101904  839362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:51:53.101964  839362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:51:53.118386  839362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I1114 14:51:53.118801  839362 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:51:53.119308  839362 main.go:141] libmachine: Using API Version  1
	I1114 14:51:53.119331  839362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:51:53.119713  839362 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:51:53.119903  839362 main.go:141] libmachine: (functional-593453) Calling .DriverName
	I1114 14:51:53.120381  839362 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:51:53.120785  839362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 14:51:53.120826  839362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:51:53.137291  839362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I1114 14:51:53.137733  839362 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:51:53.138217  839362 main.go:141] libmachine: Using API Version  1
	I1114 14:51:53.138238  839362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:51:53.138617  839362 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:51:53.138793  839362 main.go:141] libmachine: (functional-593453) Calling .DriverName
	I1114 14:51:53.183439  839362 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1114 14:51:53.185031  839362 start.go:298] selected driver: kvm2
	I1114 14:51:53.185049  839362 start.go:902] validating driver "kvm2" against &{Name:functional-593453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17565/minikube-v1.32.1-1699485311-17565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-593453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.39 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:51:53.185184  839362 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:51:53.187742  839362 out.go:177] 
	W1114 14:51:53.189288  839362 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1114 14:51:53.190587  839362 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

TestFunctional/parallel/ServiceCmdConnect (13.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-593453 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-593453 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-dcvn4" [b68a532b-6525-4f82-a964-edc9298d4c78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-dcvn4" [b68a532b-6525-4f82-a964-edc9298d4c78] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.025558265s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.39:31527
functional_test.go:1674: http://192.168.50.39:31527: success! body:

Hostname: hello-node-connect-55497b8b78-dcvn4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.39:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.39:31527
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.81s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (38.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [125b0e4a-912b-4c66-892b-08d2c0054022] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01707184s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-593453 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-593453 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-593453 get pvc myclaim -o=json
E1114 14:51:34.576861  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:34.582968  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:34.593259  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:34.613573  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:34.654357  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:34.734711  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:34.895246  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:35.216325  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:51:35.856643  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-593453 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-593453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f05fb825-4575-4b7e-8b9b-3f732cd38658] Pending
E1114 14:51:37.136907  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [f05fb825-4575-4b7e-8b9b-3f732cd38658] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1114 14:51:39.697843  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [f05fb825-4575-4b7e-8b9b-3f732cd38658] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.015814614s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-593453 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-593453 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-593453 delete -f testdata/storage-provisioner/pod.yaml: (2.256511923s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-593453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4da10d0a-93b6-486c-8e6c-bc0d82383030] Pending
helpers_test.go:344: "sp-pod" [4da10d0a-93b6-486c-8e6c-bc0d82383030] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4da10d0a-93b6-486c-8e6c-bc0d82383030] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.047187698s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-593453 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.95s)

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh -n functional-593453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 cp functional-593453:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3789459788/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh -n functional-593453 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)

TestFunctional/parallel/MySQL (25.39s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-593453 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-sh99z" [75182c5f-833b-4c4c-b19d-f7f85f349356] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-sh99z" [75182c5f-833b-4c4c-b19d-f7f85f349356] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.029808236s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-593453 exec mysql-859648c796-sh99z -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-593453 exec mysql-859648c796-sh99z -- mysql -ppassword -e "show databases;": exit status 1 (279.558586ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-593453 exec mysql-859648c796-sh99z -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-593453 exec mysql-859648c796-sh99z -- mysql -ppassword -e "show databases;": exit status 1 (213.635063ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-593453 exec mysql-859648c796-sh99z -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.39s)
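The two non-zero exits above are the expected warm-up failures for the MySQL pod: ERROR 1045 (access denied) and ERROR 2002 (socket not yet available) both appear while the container is still initialising, so the test simply re-runs the query until it succeeds. A minimal Go sketch of that retry loop (not the actual functional_test.go helper; it assumes kubectl is on PATH and reuses this run's context and pod name as placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL re-runs `kubectl exec ... mysql -e "show databases;"` until it
// exits 0 or the deadline passes, mirroring the retries in the log above.
// The context and pod name are taken from this run and are placeholders.
func waitForMySQL(context, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", context, "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mysql not ready before timeout: %v\n%s", err, out)
		}
		// ERROR 1045 and ERROR 2002 are transient while the container starts up.
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForMySQL("functional-593453", "mysql-859648c796-sh99z", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}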

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/832211/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /etc/test/nested/copy/832211/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/832211.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /etc/ssl/certs/832211.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/832211.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /usr/share/ca-certificates/832211.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8322112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /etc/ssl/certs/8322112.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8322112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /usr/share/ca-certificates/8322112.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)
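The check above covers both the verbatim certificate paths (/etc/ssl/certs/832211.pem, /usr/share/ca-certificates/832211.pem) and the hash-named copies (51391683.0, 3ec20f2e.0). A minimal Go sketch of the same probe, assuming minikube is on PATH; the profile name and paths are copied from this run for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

// checkCert confirms a synced certificate is readable at a path inside the VM,
// mirroring the probes in the log above.
func checkCert(profile, path string) bool {
	return exec.Command("minikube", "-p", profile, "ssh", "sudo cat "+path).Run() == nil
}

func main() {
	for _, p := range []string{
		"/etc/ssl/certs/832211.pem",
		"/usr/share/ca-certificates/832211.pem",
		"/etc/ssl/certs/51391683.0", // hash-named copy, as in the log
	} {
		fmt.Printf("%-45s synced=%v\n", p, checkCert("functional-593453", p))
	}
}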

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-593453 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh "sudo systemctl is-active docker": exit status 1 (241.899991ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh "sudo systemctl is-active containerd": exit status 1 (286.140858ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
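Because this profile runs cri-o, the test expects docker and containerd to be inactive. `systemctl is-active` exits non-zero for anything other than an active unit (status 3 above), so the non-zero exits recorded here are the passing case and the stdout text is the real signal. A minimal Go sketch of that check, assuming minikube is on PATH and reusing this run's profile name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive runs `minikube ssh "sudo systemctl is-active <unit>"` and
// checks the printed state. Because the exit code alone cannot tell
// "inactive" apart from an ssh failure, the stdout text is the reliable signal.
func runtimeInactive(profile, unit string) bool {
	out, _ := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).CombinedOutput()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive("functional-593453", unit))
	}
}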

                                                
                                    
x
+
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (23.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdany-port3958457769/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699973486867935037" to /tmp/TestFunctionalparallelMountCmdany-port3958457769/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699973486867935037" to /tmp/TestFunctionalparallelMountCmdany-port3958457769/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699973486867935037" to /tmp/TestFunctionalparallelMountCmdany-port3958457769/001/test-1699973486867935037
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.840965ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 14 14:51 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 14 14:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 14 14:51 test-1699973486867935037
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh cat /mount-9p/test-1699973486867935037
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-593453 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [49b69a69-8f2b-412a-aed0-90baa21668d3] Pending
helpers_test.go:344: "busybox-mount" [49b69a69-8f2b-412a-aed0-90baa21668d3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [49b69a69-8f2b-412a-aed0-90baa21668d3] Running
E1114 14:51:44.818651  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [49b69a69-8f2b-412a-aed0-90baa21668d3] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.073723444s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-593453 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdany-port3958457769/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.25s)
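The sequence above is: start `minikube mount` as a background process, confirm the 9p filesystem with findmnt (the first probe can fail before the mount is ready, as it does here), read host-created files from inside the VM, run a pod against the mount, then unmount. A condensed Go sketch of the host-side steps, assuming minikube is on PATH and reusing this run's profile name; the pod step is omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

// Condensed version of the mount check above: write a file on the host,
// mount the directory into the VM over 9p, then read it back over ssh.
// Error handling is kept minimal for brevity.
func main() {
	profile := "functional-593453"

	hostDir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(hostDir, "created-by-test"), []byte("hello from host\n"), 0o644); err != nil {
		panic(err)
	}

	// Start the mount in the background (the daemonised step in the log).
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// The log shows the first findmnt probe can run before the mount is up,
	// so give it a moment before checking.
	time.Sleep(5 * time.Second)

	for _, script := range []string{
		"findmnt -T /mount-9p | grep 9p",
		"cat /mount-9p/created-by-test",
	} {
		out, err := exec.Command("minikube", "-p", profile, "ssh", script).CombinedOutput()
		fmt.Printf("$ %s\n%s(err=%v)\n", script, out, err)
	}
}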

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593453 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-593453
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-593453
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593453 image ls --format short --alsologtostderr:
I1114 14:52:08.186891  840145 out.go:296] Setting OutFile to fd 1 ...
I1114 14:52:08.187219  840145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.187234  840145 out.go:309] Setting ErrFile to fd 2...
I1114 14:52:08.187242  840145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.187508  840145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
I1114 14:52:08.188374  840145 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.188546  840145 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.189112  840145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.189170  840145 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:08.204514  840145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
I1114 14:52:08.205073  840145 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:08.205705  840145 main.go:141] libmachine: Using API Version  1
I1114 14:52:08.205732  840145 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:08.206103  840145 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:08.206300  840145 main.go:141] libmachine: (functional-593453) Calling .GetState
I1114 14:52:08.208551  840145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.208608  840145 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:08.223216  840145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
I1114 14:52:08.223655  840145 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:08.224114  840145 main.go:141] libmachine: Using API Version  1
I1114 14:52:08.224137  840145 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:08.224519  840145 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:08.224719  840145 main.go:141] libmachine: (functional-593453) Calling .DriverName
I1114 14:52:08.224921  840145 ssh_runner.go:195] Run: systemctl --version
I1114 14:52:08.224947  840145 main.go:141] libmachine: (functional-593453) Calling .GetSSHHostname
I1114 14:52:08.227887  840145 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:08.228306  840145 main.go:141] libmachine: (functional-593453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:4a:4a", ip: ""} in network mk-functional-593453: {Iface:virbr1 ExpiryTime:2023-11-14 15:48:26 +0000 UTC Type:0 Mac:52:54:00:0a:4a:4a Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:functional-593453 Clientid:01:52:54:00:0a:4a:4a}
I1114 14:52:08.228365  840145 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined IP address 192.168.50.39 and MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:08.228482  840145 main.go:141] libmachine: (functional-593453) Calling .GetSSHPort
I1114 14:52:08.228764  840145 main.go:141] libmachine: (functional-593453) Calling .GetSSHKeyPath
I1114 14:52:08.228965  840145 main.go:141] libmachine: (functional-593453) Calling .GetSSHUsername
I1114 14:52:08.229155  840145 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/functional-593453/id_rsa Username:docker}
I1114 14:52:08.403636  840145 ssh_runner.go:195] Run: sudo crictl images --output json
I1114 14:52:08.511427  840145 main.go:141] libmachine: Making call to close driver server
I1114 14:52:08.511444  840145 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:08.511795  840145 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:08.511814  840145 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:08.511823  840145 main.go:141] libmachine: Making call to close driver server
I1114 14:52:08.511832  840145 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:08.511848  840145 main.go:141] libmachine: (functional-593453) DBG | Closing plugin on server side
I1114 14:52:08.512128  840145 main.go:141] libmachine: (functional-593453) DBG | Closing plugin on server side
I1114 14:52:08.512144  840145 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:08.512188  840145 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593453 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/google-containers/addon-resizer  | functional-593453  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | 547b3c3c15a96 | 520MB  |
| docker.io/library/nginx                 | latest             | c20060033e06f | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| localhost/minikube-local-cache-test     | functional-593453  | 933b130e7a634 | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593453 image ls --format table --alsologtostderr:
I1114 14:52:09.243216  840298 out.go:296] Setting OutFile to fd 1 ...
I1114 14:52:09.243521  840298 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:09.243533  840298 out.go:309] Setting ErrFile to fd 2...
I1114 14:52:09.243541  840298 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:09.243739  840298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
I1114 14:52:09.244389  840298 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:09.244525  840298 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:09.245019  840298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:09.245079  840298 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:09.260959  840298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
I1114 14:52:09.261450  840298 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:09.262068  840298 main.go:141] libmachine: Using API Version  1
I1114 14:52:09.262105  840298 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:09.262457  840298 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:09.262648  840298 main.go:141] libmachine: (functional-593453) Calling .GetState
I1114 14:52:09.264573  840298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:09.264625  840298 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:09.279138  840298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
I1114 14:52:09.279540  840298 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:09.280055  840298 main.go:141] libmachine: Using API Version  1
I1114 14:52:09.280077  840298 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:09.280444  840298 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:09.280663  840298 main.go:141] libmachine: (functional-593453) Calling .DriverName
I1114 14:52:09.280916  840298 ssh_runner.go:195] Run: systemctl --version
I1114 14:52:09.280954  840298 main.go:141] libmachine: (functional-593453) Calling .GetSSHHostname
I1114 14:52:09.284032  840298 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:09.284600  840298 main.go:141] libmachine: (functional-593453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:4a:4a", ip: ""} in network mk-functional-593453: {Iface:virbr1 ExpiryTime:2023-11-14 15:48:26 +0000 UTC Type:0 Mac:52:54:00:0a:4a:4a Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:functional-593453 Clientid:01:52:54:00:0a:4a:4a}
I1114 14:52:09.284641  840298 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined IP address 192.168.50.39 and MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:09.284762  840298 main.go:141] libmachine: (functional-593453) Calling .GetSSHPort
I1114 14:52:09.284946  840298 main.go:141] libmachine: (functional-593453) Calling .GetSSHKeyPath
I1114 14:52:09.285109  840298 main.go:141] libmachine: (functional-593453) Calling .GetSSHUsername
I1114 14:52:09.285241  840298 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/functional-593453/id_rsa Username:docker}
I1114 14:52:09.426061  840298 ssh_runner.go:195] Run: sudo crictl images --output json
I1114 14:52:09.520690  840298 main.go:141] libmachine: Making call to close driver server
I1114 14:52:09.520717  840298 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:09.521089  840298 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:09.521112  840298 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:09.521123  840298 main.go:141] libmachine: Making call to close driver server
I1114 14:52:09.521134  840298 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:09.521405  840298 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:09.521447  840298 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:09.521435  840298 main.go:141] libmachine: (functional-593453) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593453 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-593453"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/stora
ge-provisioner:v5"],"size":"31470524"},{"id":"933b130e7a6345a562bfa084ba8fe35f89825e6fb4694ba6a5096cdc9f4a5240","repoDigests":["localhost/minikube-local-cache-test@sha256:d0c0c70efc8b0ef899b2e01622c23a5d919f4a85913afeb0277d6fdd9185240b"],"repoTags":["localhost/minikube-local-cache-test:functional-593453"],"size":"3345"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":
["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6","docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{
"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8
s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":["docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9","docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519576537"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed
6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3c
aa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593453 image ls --format json --alsologtostderr:
I1114 14:52:08.914487  840246 out.go:296] Setting OutFile to fd 1 ...
I1114 14:52:08.914661  840246 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.914673  840246 out.go:309] Setting ErrFile to fd 2...
I1114 14:52:08.914678  840246 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.914879  840246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
I1114 14:52:08.915515  840246 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.915675  840246 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.916315  840246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.916386  840246 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:08.932896  840246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39927
I1114 14:52:08.933429  840246 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:08.934093  840246 main.go:141] libmachine: Using API Version  1
I1114 14:52:08.934118  840246 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:08.934532  840246 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:08.934767  840246 main.go:141] libmachine: (functional-593453) Calling .GetState
I1114 14:52:08.936919  840246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.936975  840246 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:08.956875  840246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
I1114 14:52:08.957372  840246 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:08.957994  840246 main.go:141] libmachine: Using API Version  1
I1114 14:52:08.958020  840246 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:08.958417  840246 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:08.958648  840246 main.go:141] libmachine: (functional-593453) Calling .DriverName
I1114 14:52:08.958908  840246 ssh_runner.go:195] Run: systemctl --version
I1114 14:52:08.958962  840246 main.go:141] libmachine: (functional-593453) Calling .GetSSHHostname
I1114 14:52:08.963004  840246 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:08.963556  840246 main.go:141] libmachine: (functional-593453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:4a:4a", ip: ""} in network mk-functional-593453: {Iface:virbr1 ExpiryTime:2023-11-14 15:48:26 +0000 UTC Type:0 Mac:52:54:00:0a:4a:4a Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:functional-593453 Clientid:01:52:54:00:0a:4a:4a}
I1114 14:52:08.963606  840246 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined IP address 192.168.50.39 and MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:08.963672  840246 main.go:141] libmachine: (functional-593453) Calling .GetSSHPort
I1114 14:52:08.963850  840246 main.go:141] libmachine: (functional-593453) Calling .GetSSHKeyPath
I1114 14:52:08.963982  840246 main.go:141] libmachine: (functional-593453) Calling .GetSSHUsername
I1114 14:52:08.964205  840246 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/functional-593453/id_rsa Username:docker}
I1114 14:52:09.075660  840246 ssh_runner.go:195] Run: sudo crictl images --output json
I1114 14:52:09.170559  840246 main.go:141] libmachine: Making call to close driver server
I1114 14:52:09.170575  840246 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:09.170897  840246 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:09.170915  840246 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:09.170930  840246 main.go:141] libmachine: Making call to close driver server
I1114 14:52:09.170939  840246 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:09.171189  840246 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:09.171223  840246 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
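The JSON format is the easiest of the three listings to consume programmatically: each entry carries id, repoDigests, repoTags and size, as visible in the stdout above. A minimal Go sketch that decodes that output, assuming minikube is on PATH; the profile name is from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-593453",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}

Entries with an empty repoTags list (such as the metrics-scraper image in this run) still carry repoDigests, so falling back to the digest is a reasonable default when a tag is missing.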

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593453 image ls --format yaml --alsologtostderr:
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests:
- docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9
- docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d
repoTags:
- docker.io/library/mysql:5.7
size: "519576537"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 933b130e7a6345a562bfa084ba8fe35f89825e6fb4694ba6a5096cdc9f4a5240
repoDigests:
- localhost/minikube-local-cache-test@sha256:d0c0c70efc8b0ef899b2e01622c23a5d919f4a85913afeb0277d6fdd9185240b
repoTags:
- localhost/minikube-local-cache-test:functional-593453
size: "3345"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
- docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-593453
size: "34114467"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593453 image ls --format yaml --alsologtostderr:
I1114 14:52:08.578053  840168 out.go:296] Setting OutFile to fd 1 ...
I1114 14:52:08.578212  840168 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.578224  840168 out.go:309] Setting ErrFile to fd 2...
I1114 14:52:08.578230  840168 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.578422  840168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
I1114 14:52:08.579020  840168 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.579127  840168 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.579746  840168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.579805  840168 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:08.595400  840168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
I1114 14:52:08.595935  840168 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:08.596559  840168 main.go:141] libmachine: Using API Version  1
I1114 14:52:08.596589  840168 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:08.596942  840168 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:08.597141  840168 main.go:141] libmachine: (functional-593453) Calling .GetState
I1114 14:52:08.599100  840168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.599159  840168 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:08.613598  840168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
I1114 14:52:08.614051  840168 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:08.614538  840168 main.go:141] libmachine: Using API Version  1
I1114 14:52:08.614564  840168 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:08.614956  840168 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:08.615193  840168 main.go:141] libmachine: (functional-593453) Calling .DriverName
I1114 14:52:08.615439  840168 ssh_runner.go:195] Run: systemctl --version
I1114 14:52:08.615467  840168 main.go:141] libmachine: (functional-593453) Calling .GetSSHHostname
I1114 14:52:08.618761  840168 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:08.619286  840168 main.go:141] libmachine: (functional-593453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:4a:4a", ip: ""} in network mk-functional-593453: {Iface:virbr1 ExpiryTime:2023-11-14 15:48:26 +0000 UTC Type:0 Mac:52:54:00:0a:4a:4a Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:functional-593453 Clientid:01:52:54:00:0a:4a:4a}
I1114 14:52:08.619325  840168 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined IP address 192.168.50.39 and MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:08.619449  840168 main.go:141] libmachine: (functional-593453) Calling .GetSSHPort
I1114 14:52:08.619645  840168 main.go:141] libmachine: (functional-593453) Calling .GetSSHKeyPath
I1114 14:52:08.619799  840168 main.go:141] libmachine: (functional-593453) Calling .GetSSHUsername
I1114 14:52:08.619930  840168 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/functional-593453/id_rsa Username:docker}
I1114 14:52:08.745821  840168 ssh_runner.go:195] Run: sudo crictl images --output json
I1114 14:52:08.844855  840168 main.go:141] libmachine: Making call to close driver server
I1114 14:52:08.844877  840168 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:08.845194  840168 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:08.845297  840168 main.go:141] libmachine: (functional-593453) DBG | Closing plugin on server side
I1114 14:52:08.845329  840168 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:08.845339  840168 main.go:141] libmachine: Making call to close driver server
I1114 14:52:08.845351  840168 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:08.845656  840168 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:08.845677  840168 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh pgrep buildkitd: exit status 1 (264.118019ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image build -t localhost/my-image:functional-593453 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image build -t localhost/my-image:functional-593453 testdata/build --alsologtostderr: (5.24277984s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593453 image build -t localhost/my-image:functional-593453 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8cdbbaab5ea
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-593453
--> b2ae678e7fa
Successfully tagged localhost/my-image:functional-593453
b2ae678e7fa92b8e450b7afc4d2f2d695424b59678072e69a46957ad92a4b591
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593453 image build -t localhost/my-image:functional-593453 testdata/build --alsologtostderr:
I1114 14:52:08.984038  840258 out.go:296] Setting OutFile to fd 1 ...
I1114 14:52:08.984371  840258 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.984383  840258 out.go:309] Setting ErrFile to fd 2...
I1114 14:52:08.984387  840258 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 14:52:08.984656  840258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
I1114 14:52:08.985432  840258 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.986181  840258 config.go:182] Loaded profile config "functional-593453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 14:52:08.986818  840258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:08.986899  840258 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:09.003058  840258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
I1114 14:52:09.003596  840258 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:09.004329  840258 main.go:141] libmachine: Using API Version  1
I1114 14:52:09.004360  840258 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:09.004782  840258 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:09.005038  840258 main.go:141] libmachine: (functional-593453) Calling .GetState
I1114 14:52:09.007341  840258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1114 14:52:09.007397  840258 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 14:52:09.022977  840258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
I1114 14:52:09.023405  840258 main.go:141] libmachine: () Calling .GetVersion
I1114 14:52:09.024021  840258 main.go:141] libmachine: Using API Version  1
I1114 14:52:09.024080  840258 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 14:52:09.024505  840258 main.go:141] libmachine: () Calling .GetMachineName
I1114 14:52:09.024793  840258 main.go:141] libmachine: (functional-593453) Calling .DriverName
I1114 14:52:09.025047  840258 ssh_runner.go:195] Run: systemctl --version
I1114 14:52:09.025076  840258 main.go:141] libmachine: (functional-593453) Calling .GetSSHHostname
I1114 14:52:09.028288  840258 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:09.028805  840258 main.go:141] libmachine: (functional-593453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:4a:4a", ip: ""} in network mk-functional-593453: {Iface:virbr1 ExpiryTime:2023-11-14 15:48:26 +0000 UTC Type:0 Mac:52:54:00:0a:4a:4a Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:functional-593453 Clientid:01:52:54:00:0a:4a:4a}
I1114 14:52:09.028835  840258 main.go:141] libmachine: (functional-593453) DBG | domain functional-593453 has defined IP address 192.168.50.39 and MAC address 52:54:00:0a:4a:4a in network mk-functional-593453
I1114 14:52:09.028949  840258 main.go:141] libmachine: (functional-593453) Calling .GetSSHPort
I1114 14:52:09.029136  840258 main.go:141] libmachine: (functional-593453) Calling .GetSSHKeyPath
I1114 14:52:09.029330  840258 main.go:141] libmachine: (functional-593453) Calling .GetSSHUsername
I1114 14:52:09.029511  840258 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/functional-593453/id_rsa Username:docker}
I1114 14:52:09.168270  840258 build_images.go:151] Building image from path: /tmp/build.4154121480.tar
I1114 14:52:09.168370  840258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1114 14:52:09.211651  840258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4154121480.tar
I1114 14:52:09.224002  840258 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4154121480.tar: stat -c "%s %y" /var/lib/minikube/build/build.4154121480.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4154121480.tar': No such file or directory
I1114 14:52:09.224037  840258 ssh_runner.go:362] scp /tmp/build.4154121480.tar --> /var/lib/minikube/build/build.4154121480.tar (3072 bytes)
I1114 14:52:09.266516  840258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4154121480
I1114 14:52:09.280850  840258 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4154121480 -xf /var/lib/minikube/build/build.4154121480.tar
I1114 14:52:09.303751  840258 crio.go:297] Building image: /var/lib/minikube/build/build.4154121480
I1114 14:52:09.303829  840258 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-593453 /var/lib/minikube/build/build.4154121480 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1114 14:52:14.130704  840258 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-593453 /var/lib/minikube/build/build.4154121480 --cgroup-manager=cgroupfs: (4.826825118s)
I1114 14:52:14.130811  840258 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4154121480
I1114 14:52:14.143660  840258 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4154121480.tar
I1114 14:52:14.152675  840258 build_images.go:207] Built localhost/my-image:functional-593453 from /tmp/build.4154121480.tar
I1114 14:52:14.152735  840258 build_images.go:123] succeeded building to: functional-593453
I1114 14:52:14.152767  840258 build_images.go:124] failed building to: 
I1114 14:52:14.152806  840258 main.go:141] libmachine: Making call to close driver server
I1114 14:52:14.152827  840258 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:14.153212  840258 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:14.153244  840258 main.go:141] libmachine: (functional-593453) DBG | Closing plugin on server side
I1114 14:52:14.153250  840258 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:14.153262  840258 main.go:141] libmachine: Making call to close driver server
I1114 14:52:14.153272  840258 main.go:141] libmachine: (functional-593453) Calling .Close
I1114 14:52:14.153580  840258 main.go:141] libmachine: Successfully made call to close driver server
I1114 14:52:14.153605  840258 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 14:52:14.153611  840258 main.go:141] libmachine: (functional-593453) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls
E1114 14:52:15.539820  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
2023/11/14 14:52:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.77s)
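
For reference, the build exercised above can be reproduced by hand against the same profile. This is only a sketch: the Dockerfile lines are inferred from the STEP 1/3..3/3 output logged above, and the content.txt payload plus the /tmp/build-sketch path are placeholders, since the real testdata/build contents are not shown in the log.

# recreate a build context equivalent to testdata/build (file contents assumed)
mkdir -p /tmp/build-sketch
echo "placeholder payload" > /tmp/build-sketch/content.txt   # assumed; the real content.txt is not in the log
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
# same invocation the test uses, pointed at the sketch directory instead of testdata/build
out/minikube-linux-amd64 -p functional-593453 image build -t localhost/my-image:functional-593453 /tmp/build-sketch --alsologtostderr
out/minikube-linux-amd64 -p functional-593453 image ls   # the freshly built tag should appear in the list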

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-593453
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (11.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr: (11.356999294s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (11.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.086441905s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-593453
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr: (4.305364786s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.67s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdspecific-port2261823482/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.351282ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdspecific-port2261823482/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh "sudo umount -f /mount-9p": exit status 1 (231.975431ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-593453 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdspecific-port2261823482/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608443185/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608443185/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608443185/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T" /mount1: exit status 1 (336.292825ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-593453 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608443185/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608443185/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608443185/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
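
Taken together, the two MountCmd subtests above amount to the following 9p mount workflow; this sketch reuses the same commands the tests drive, with an arbitrary example host directory in place of the temp dirs created by the harness.

# start a 9p mount on a fixed port (runs in the foreground; background it for scripting)
out/minikube-linux-amd64 mount -p functional-593453 /tmp/mount-example:/mount-9p --port 46464 --alsologtostderr -v=1 &
# verify the guest sees the mount
out/minikube-linux-amd64 -p functional-593453 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-593453 ssh -- ls -la /mount-9p
# tear down every mount process for the profile, as VerifyCleanup does
out/minikube-linux-amd64 mount -p functional-593453 --kill=true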

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image save gcr.io/google-containers/addon-resizer:functional-593453 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image save gcr.io/google-containers/addon-resizer:functional-593453 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.637686291s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (13.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-593453 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-593453 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-2kqkt" [21d6802b-897a-4f56-9606-f096b18baf5c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-2kqkt" [21d6802b-897a-4f56-9606-f096b18baf5c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.035819775s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image rm gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
E1114 14:51:55.059565  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.263042518s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-593453
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 image save --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 image save --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr: (2.367188476s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-593453
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.40s)
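
The image subtests above collectively exercise a load/save round trip between the local docker daemon and the cluster's container runtime. Condensed into one sequence (tags copied from the runs above; the tarball path is an example rather than the Jenkins workspace path used by the harness), it looks like this:

# stage a test image in the local docker daemon
docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-593453
# push it into the cluster runtime, then list to confirm
out/minikube-linux-amd64 -p functional-593453 image load --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
out/minikube-linux-amd64 -p functional-593453 image ls
# save to a tarball, remove the in-cluster copy, and reload from the tarball
out/minikube-linux-amd64 -p functional-593453 image save gcr.io/google-containers/addon-resizer:functional-593453 /tmp/addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-593453 image rm gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
out/minikube-linux-amd64 -p functional-593453 image load /tmp/addon-resizer-save.tar --alsologtostderr
# export back to the docker daemon and confirm it arrived
docker rmi gcr.io/google-containers/addon-resizer:functional-593453
out/minikube-linux-amd64 -p functional-593453 image save --daemon gcr.io/google-containers/addon-resizer:functional-593453 --alsologtostderr
docker image inspect gcr.io/google-containers/addon-resizer:functional-593453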

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "241.82808ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "61.665733ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "231.827073ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.884083ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
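
The three ProfileCmd subtests only differ in output format; the variants timed above are, in sketch form:

out/minikube-linux-amd64 profile list                    # human-readable table (~240ms in the runs above)
out/minikube-linux-amd64 profile list -l                 # light mode, skips probing cluster status (~60ms above)
out/minikube-linux-amd64 profile list -o json            # machine-readable output
out/minikube-linux-amd64 profile list -o json --light    # both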

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 service list: (1.386693881s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-593453 service list -o json: (1.350413999s)
functional_test.go:1493: Took "1.350530909s" to run "out/minikube-linux-amd64 -p functional-593453 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.39:32068
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-593453 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.39:32068
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
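
The ServiceCmd subtests above walk one NodePort service from deployment to URL lookup. As a single sketch, reusing the exact commands recorded above (the endpoint shown is the one this run happened to get):

# deploy and expose the echo server used by the tests
kubectl --context functional-593453 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-593453 expose deployment hello-node --type=NodePort --port=8080
# inspect the service through minikube
out/minikube-linux-amd64 -p functional-593453 service list
out/minikube-linux-amd64 -p functional-593453 service list -o json
out/minikube-linux-amd64 -p functional-593453 service hello-node --url                    # e.g. http://192.168.50.39:32068 above
out/minikube-linux-amd64 -p functional-593453 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-593453 service hello-node --url --format='{{.IP}}'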

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-593453
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-593453
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-593453
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (79.7s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-944535 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1114 14:52:56.501074  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-944535 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.702866608s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (79.70s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons enable ingress --alsologtostderr -v=5: (12.45531036s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.46s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-944535 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

                                                
                                    
x
+
TestJSONOutput/start/Command (101.46s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-335522 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1114 14:56:48.102709  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:57:02.265960  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 14:57:08.583830  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:57:49.545953  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-335522 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.457401879s)
--- PASS: TestJSONOutput/start/Command (101.46s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-335522 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-335522 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.12s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-335522 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-335522 --output=json --user=testUser: (7.122317709s)
--- PASS: TestJSONOutput/stop/Command (7.12s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-029828 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-029828 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.183076ms)

-- stdout --
	{"specversion":"1.0","id":"0f320143-168a-4355-86ad-e866bf58728c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-029828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"03924f3a-cc0f-4ff6-8503-4dd215c1c48e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17598"}}
	{"specversion":"1.0","id":"5daabd45-72b0-4c33-b1ae-b2bfe8e41f78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0abf7dd1-fbec-403a-a149-3a2e1d6830bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig"}}
	{"specversion":"1.0","id":"1a648438-3e1b-4df1-a7b5-e4cdb76d65ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube"}}
	{"specversion":"1.0","id":"e94d05f2-f358-4b31-9ed5-bd7c9518b67d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0eb31c2e-dade-421a-92f7-7ca4c36ebd32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3b36aef7-6767-4803-a53d-57055d0e78f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-029828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-029828
--- PASS: TestErrorJSONOutput (0.23s)
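
The failure path above shows that minikube emits CloudEvents-style JSON on stdout even when start fails. A minimal sketch of consuming that stream follows; jq and the example profile name are my additions, not part of the test.

# --driver=fail forces exit status 56; the error event still arrives as JSON on stdout
out/minikube-linux-amd64 start -p json-output-error-example --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'
# expected output, per the event captured above:
# DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64
out/minikube-linux-amd64 delete -p json-output-error-example   # clean up the stub profile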

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (100.22s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-238986 --driver=kvm2  --container-runtime=crio
E1114 14:58:52.668539  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:52.673888  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:52.684162  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:52.704387  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:52.744652  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:52.824996  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:52.985508  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:53.306083  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:53.947021  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:55.227522  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:58:57.788436  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:59:02.909009  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 14:59:11.469626  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 14:59:13.149972  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-238986 --driver=kvm2  --container-runtime=crio: (45.901573149s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-241695 --driver=kvm2  --container-runtime=crio
E1114 14:59:33.630629  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-241695 --driver=kvm2  --container-runtime=crio: (51.515487484s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-238986
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-241695
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-241695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-241695
helpers_test.go:175: Cleaning up "first-238986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-238986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-238986: (1.008139792s)
--- PASS: TestMinikubeProfile (100.22s)
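
TestMinikubeProfile above is essentially a two-profile lifecycle check; the sequence it runs boils down to the following sketch (profile names copied from this run).

out/minikube-linux-amd64 start -p first-238986 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 start -p second-241695 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 profile first-238986     # switch the active profile
out/minikube-linux-amd64 profile list -ojson      # confirm which profile is marked active
out/minikube-linux-amd64 profile second-241695
out/minikube-linux-amd64 profile list -ojson
out/minikube-linux-amd64 delete -p second-241695
out/minikube-linux-amd64 delete -p first-238986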

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-265134 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1114 15:00:14.591707  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-265134 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.898228735s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-265134 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-265134 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-286482 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-286482 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.587418307s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-265134 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-286482
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-286482: (1.211140042s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-286482
E1114 15:01:27.623126  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-286482: (22.252274911s)
--- PASS: TestMountStart/serial/RestartStopped (23.25s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)
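
The MountStart serial tests above exercise a no-Kubernetes VM whose 9p host mount survives a stop/start cycle. Condensed, with flags taken from the runs above (the second profile is shown; the first only differs in name and mount port):

# start a no-Kubernetes VM with a 9p host mount on a fixed port
out/minikube-linux-amd64 start -p mount-start-2-286482 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
# verify the mount from inside the guest
out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- ls /minikube-host
out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- mount | grep 9p
# stop, restart without extra flags, and confirm the mount comes back
out/minikube-linux-amd64 stop -p mount-start-2-286482
out/minikube-linux-amd64 start -p mount-start-2-286482
out/minikube-linux-amd64 -p mount-start-2-286482 ssh -- mount | grep 9p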

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (108.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-627820 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1114 15:01:36.512014  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:01:55.309918  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-627820 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.315316879s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.75s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-627820 -- rollout status deployment/busybox: (3.308162325s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.46s)
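
DeployApp2Nodes above checks cluster DNS from two busybox replicas scheduled across the nodes. The underlying commands, which go through minikube's bundled kubectl, are roughly as follows; the pod names are specific to this run and would differ on a fresh deployment.

# deploy the two-replica busybox workload used by the test
out/minikube-linux-amd64 kubectl -p multinode-627820 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p multinode-627820 -- rollout status deployment/busybox
# list the pod names, then resolve internal and external names from each pod
out/minikube-linux-amd64 kubectl -p multinode-627820 -- get pods -o jsonpath='{.items[*].metadata.name}'
out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-nqqlc -- nslookup kubernetes.default.svc.cluster.local
out/minikube-linux-amd64 kubectl -p multinode-627820 -- exec busybox-5bc68d56bd-rxmbm -- nslookup kubernetes.io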

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-627820 -v 3 --alsologtostderr
E1114 15:03:52.668514  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-627820 -v 3 --alsologtostderr: (41.346008195s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.96s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp testdata/cp-test.txt multinode-627820:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2006925696/001/cp-test_multinode-627820.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820:/home/docker/cp-test.txt multinode-627820-m02:/home/docker/cp-test_multinode-627820_multinode-627820-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test_multinode-627820_multinode-627820-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820:/home/docker/cp-test.txt multinode-627820-m03:/home/docker/cp-test_multinode-627820_multinode-627820-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m03 "sudo cat /home/docker/cp-test_multinode-627820_multinode-627820-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp testdata/cp-test.txt multinode-627820-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2006925696/001/cp-test_multinode-627820-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820-m02:/home/docker/cp-test.txt multinode-627820:/home/docker/cp-test_multinode-627820-m02_multinode-627820.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820 "sudo cat /home/docker/cp-test_multinode-627820-m02_multinode-627820.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820-m02:/home/docker/cp-test.txt multinode-627820-m03:/home/docker/cp-test_multinode-627820-m02_multinode-627820-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m03 "sudo cat /home/docker/cp-test_multinode-627820-m02_multinode-627820-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp testdata/cp-test.txt multinode-627820-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2006925696/001/cp-test_multinode-627820-m03.txt
E1114 15:04:20.352687  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820-m03:/home/docker/cp-test.txt multinode-627820:/home/docker/cp-test_multinode-627820-m03_multinode-627820.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820 "sudo cat /home/docker/cp-test_multinode-627820-m03_multinode-627820.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 cp multinode-627820-m03:/home/docker/cp-test.txt multinode-627820-m02:/home/docker/cp-test_multinode-627820-m03_multinode-627820-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test_multinode-627820-m03_multinode-627820-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.95s)
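The copy matrix above reduces to three forms of the cp subcommand plus an ssh verification (a minimal sketch, assuming the same running profile; the destination paths here are illustrative, the test uses its own temp directory):

    # Local file into a node
    minikube -p multinode-627820 cp testdata/cp-test.txt multinode-627820:/home/docker/cp-test.txt
    # Node back to the local machine
    minikube -p multinode-627820 cp multinode-627820:/home/docker/cp-test.txt /tmp/cp-test_multinode-627820.txt
    # Node to node within the same profile
    minikube -p multinode-627820 cp multinode-627820:/home/docker/cp-test.txt multinode-627820-m02:/home/docker/cp-test.txt
    # Each copy is checked by cat-ing the file over ssh on the destination node
    minikube -p multinode-627820 ssh -n multinode-627820-m02 "sudo cat /home/docker/cp-test.txt"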

                                                
                                    
TestMultiNode/serial/StopNode (3.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-627820 node stop m03: (2.0972123s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-627820 status: exit status 7 (462.046686ms)

                                                
                                                
-- stdout --
	multinode-627820
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-627820-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-627820-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-627820 status --alsologtostderr: exit status 7 (459.094419ms)

                                                
                                                
-- stdout --
	multinode-627820
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-627820-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-627820-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:04:24.816041  847232 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:04:24.816156  847232 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:04:24.816161  847232 out.go:309] Setting ErrFile to fd 2...
	I1114 15:04:24.816165  847232 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:04:24.816326  847232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:04:24.816505  847232 out.go:303] Setting JSON to false
	I1114 15:04:24.816544  847232 mustload.go:65] Loading cluster: multinode-627820
	I1114 15:04:24.816597  847232 notify.go:220] Checking for updates...
	I1114 15:04:24.816943  847232 config.go:182] Loaded profile config "multinode-627820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:04:24.816961  847232 status.go:255] checking status of multinode-627820 ...
	I1114 15:04:24.817304  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:24.817380  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:24.832788  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I1114 15:04:24.833212  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:24.833921  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:24.833945  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:24.834346  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:24.834572  847232 main.go:141] libmachine: (multinode-627820) Calling .GetState
	I1114 15:04:24.836309  847232 status.go:330] multinode-627820 host status = "Running" (err=<nil>)
	I1114 15:04:24.836325  847232 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:04:24.836632  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:24.836684  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:24.850945  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1114 15:04:24.851325  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:24.851837  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:24.851871  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:24.852205  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:24.852378  847232 main.go:141] libmachine: (multinode-627820) Calling .GetIP
	I1114 15:04:24.855023  847232 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:04:24.855427  847232 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:04:24.855452  847232 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:04:24.855657  847232 host.go:66] Checking if "multinode-627820" exists ...
	I1114 15:04:24.855939  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:24.855971  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:24.869773  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I1114 15:04:24.870130  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:24.870505  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:24.870526  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:24.870856  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:24.871027  847232 main.go:141] libmachine: (multinode-627820) Calling .DriverName
	I1114 15:04:24.871191  847232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 15:04:24.871219  847232 main.go:141] libmachine: (multinode-627820) Calling .GetSSHHostname
	I1114 15:04:24.874143  847232 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:04:24.874575  847232 main.go:141] libmachine: (multinode-627820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:37:2e", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:01:50 +0000 UTC Type:0 Mac:52:54:00:c4:37:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-627820 Clientid:01:52:54:00:c4:37:2e}
	I1114 15:04:24.874604  847232 main.go:141] libmachine: (multinode-627820) DBG | domain multinode-627820 has defined IP address 192.168.39.63 and MAC address 52:54:00:c4:37:2e in network mk-multinode-627820
	I1114 15:04:24.874785  847232 main.go:141] libmachine: (multinode-627820) Calling .GetSSHPort
	I1114 15:04:24.874967  847232 main.go:141] libmachine: (multinode-627820) Calling .GetSSHKeyPath
	I1114 15:04:24.875110  847232 main.go:141] libmachine: (multinode-627820) Calling .GetSSHUsername
	I1114 15:04:24.875254  847232 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820/id_rsa Username:docker}
	I1114 15:04:24.964295  847232 ssh_runner.go:195] Run: systemctl --version
	I1114 15:04:24.970265  847232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:04:24.982938  847232 kubeconfig.go:92] found "multinode-627820" server: "https://192.168.39.63:8443"
	I1114 15:04:24.982970  847232 api_server.go:166] Checking apiserver status ...
	I1114 15:04:24.983015  847232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 15:04:24.993937  847232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	I1114 15:04:25.003454  847232 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod618073575d26c84596a59c7ddac9e2b1/crio-38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277"
	I1114 15:04:25.003536  847232 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod618073575d26c84596a59c7ddac9e2b1/crio-38842b79258e49e8204516b8e7ff6e58f6b9de2880a21cc788829ebb75edb277/freezer.state
	I1114 15:04:25.014095  847232 api_server.go:204] freezer state: "THAWED"
	I1114 15:04:25.014121  847232 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I1114 15:04:25.019101  847232 api_server.go:279] https://192.168.39.63:8443/healthz returned 200:
	ok
	I1114 15:04:25.019123  847232 status.go:421] multinode-627820 apiserver status = Running (err=<nil>)
	I1114 15:04:25.019133  847232 status.go:257] multinode-627820 status: &{Name:multinode-627820 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 15:04:25.019150  847232 status.go:255] checking status of multinode-627820-m02 ...
	I1114 15:04:25.019466  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:25.019509  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:25.034581  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I1114 15:04:25.035022  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:25.035520  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:25.035543  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:25.035856  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:25.036041  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .GetState
	I1114 15:04:25.037609  847232 status.go:330] multinode-627820-m02 host status = "Running" (err=<nil>)
	I1114 15:04:25.037639  847232 host.go:66] Checking if "multinode-627820-m02" exists ...
	I1114 15:04:25.037963  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:25.038001  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:25.053459  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1114 15:04:25.053869  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:25.054360  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:25.054388  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:25.054716  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:25.054931  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .GetIP
	I1114 15:04:25.058083  847232 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:04:25.058580  847232 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:04:25.058608  847232 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:04:25.058771  847232 host.go:66] Checking if "multinode-627820-m02" exists ...
	I1114 15:04:25.059060  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:25.059099  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:25.073134  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37133
	I1114 15:04:25.073495  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:25.073926  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:25.073958  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:25.074292  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:25.074484  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .DriverName
	I1114 15:04:25.074657  847232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 15:04:25.074685  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHHostname
	I1114 15:04:25.077433  847232 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:04:25.077816  847232 main.go:141] libmachine: (multinode-627820-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:21:cd", ip: ""} in network mk-multinode-627820: {Iface:virbr1 ExpiryTime:2023-11-14 16:02:56 +0000 UTC Type:0 Mac:52:54:00:69:21:cd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-627820-m02 Clientid:01:52:54:00:69:21:cd}
	I1114 15:04:25.077859  847232 main.go:141] libmachine: (multinode-627820-m02) DBG | domain multinode-627820-m02 has defined IP address 192.168.39.38 and MAC address 52:54:00:69:21:cd in network mk-multinode-627820
	I1114 15:04:25.077977  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHPort
	I1114 15:04:25.078159  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHKeyPath
	I1114 15:04:25.078333  847232 main.go:141] libmachine: (multinode-627820-m02) Calling .GetSSHUsername
	I1114 15:04:25.078503  847232 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17598-824991/.minikube/machines/multinode-627820-m02/id_rsa Username:docker}
	I1114 15:04:25.176661  847232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 15:04:25.189234  847232 status.go:257] multinode-627820-m02 status: &{Name:multinode-627820-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 15:04:25.189269  847232 status.go:255] checking status of multinode-627820-m03 ...
	I1114 15:04:25.189580  847232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1114 15:04:25.189627  847232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 15:04:25.204389  847232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I1114 15:04:25.204910  847232 main.go:141] libmachine: () Calling .GetVersion
	I1114 15:04:25.205390  847232 main.go:141] libmachine: Using API Version  1
	I1114 15:04:25.205414  847232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 15:04:25.205775  847232 main.go:141] libmachine: () Calling .GetMachineName
	I1114 15:04:25.205964  847232 main.go:141] libmachine: (multinode-627820-m03) Calling .GetState
	I1114 15:04:25.207646  847232 status.go:330] multinode-627820-m03 host status = "Stopped" (err=<nil>)
	I1114 15:04:25.207700  847232 status.go:343] host is not running, skipping remaining checks
	I1114 15:04:25.207716  847232 status.go:257] multinode-627820-m03 status: &{Name:multinode-627820-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.02s)
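A minimal sketch of the single-node stop check above (same profile assumed): status exits non-zero, 7 in this run, while any node is down, which is what the test asserts on.

    minikube -p multinode-627820 node stop m03
    minikube -p multinode-627820 status                     # exit status 7: m03 reports host/kubelet Stopped
    minikube -p multinode-627820 status --alsologtostderr   # same result, with the driver-level checks shown above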

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-627820 node start m03 --alsologtostderr: (28.208097662s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-627820 node delete m03: (1.081937816s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.67s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (444.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-627820 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1114 15:18:52.668855  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:21:27.622649  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:21:34.577148  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:23:52.668581  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:24:37.627531  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-627820 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.983522992s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-627820 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (444.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-627820
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-627820-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-627820-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (81.71972ms)

                                                
                                                
-- stdout --
	* [multinode-627820-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-627820-m02' is duplicated with machine name 'multinode-627820-m02' in profile 'multinode-627820'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-627820-m03 --driver=kvm2  --container-runtime=crio
E1114 15:26:27.623047  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:26:34.577748  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-627820-m03 --driver=kvm2  --container-runtime=crio: (48.333819223s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-627820
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-627820: exit status 80 (238.950055ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-627820
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-627820-m03 already exists in multinode-627820-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-627820-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.52s)
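The name-conflict checks above come down to two guarded cases (a minimal sketch, assuming multinode-627820 is still running): a new profile may not reuse an existing machine name (exit 14), and node add refuses a node name that is already taken (exit 80).

    minikube node list -p multinode-627820
    # Rejected, exit 14: the profile name collides with machine multinode-627820-m02 of the existing profile
    minikube start -p multinode-627820-m02 --driver=kvm2 --container-runtime=crio
    # A differently named profile starts fine, but node add on the original profile then
    # refuses to create m03 (exit 80) because that name already exists; the helper profile is deleted afterwards
    minikube start -p multinode-627820-m03 --driver=kvm2 --container-runtime=crio
    minikube node add -p multinode-627820
    minikube delete -p multinode-627820-m03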

                                                
                                    
TestScheduledStopUnix (121.07s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-163469 --memory=2048 --driver=kvm2  --container-runtime=crio
E1114 15:31:55.713871  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-163469 --memory=2048 --driver=kvm2  --container-runtime=crio: (49.232580836s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-163469 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-163469 -n scheduled-stop-163469
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-163469 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-163469 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-163469 -n scheduled-stop-163469
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-163469
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-163469 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-163469
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-163469: exit status 7 (85.736098ms)

                                                
                                                
-- stdout --
	scheduled-stop-163469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-163469 -n scheduled-stop-163469
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-163469 -n scheduled-stop-163469: exit status 7 (77.626359ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-163469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-163469
--- PASS: TestScheduledStopUnix (121.07s)
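The scheduled-stop sequence above can be driven by hand as follows (a minimal sketch, assuming the scheduled-stop-163469 profile; the timings are the test's own):

    minikube stop -p scheduled-stop-163469 --schedule 5m        # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-163469
    minikube stop -p scheduled-stop-163469 --cancel-scheduled   # cancel the pending stop; the host stays Running
    minikube stop -p scheduled-stop-163469 --schedule 15s       # arm a short timer and let it fire
    minikube status -p scheduled-stop-163469                    # exit status 7 once the host is Stopped
    minikube delete -p scheduled-stop-163469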

                                                
                                    
TestKubernetesUpgrade (206.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.064127948s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-893852
E1114 15:36:27.621194  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-893852: (6.126093742s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-893852 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-893852 status --format={{.Host}}: exit status 7 (99.052872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1114 15:36:34.576809  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m16.369621759s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-893852 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (125.811349ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-893852] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-893852
	    minikube start -p kubernetes-upgrade-893852 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8938522 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-893852 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.97803205s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-893852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-893852
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-893852: (1.119336241s)
--- PASS: TestKubernetesUpgrade (206.96s)
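The upgrade path exercised above, condensed (a minimal sketch using the same flags; the versions are those pinned by this run):

    minikube start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-893852
    minikube start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2 --container-runtime=crio
    # Downgrading in place is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED); the advice is to delete and recreate
    minikube start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    # Restarting at the already-installed newer version succeeds
    minikube start -p kubernetes-upgrade-893852 --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2 --container-runtime=crio
    minikube delete -p kubernetes-upgrade-893852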

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235907 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-235907 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (99.663915ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-235907] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
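As the error above shows, --no-kubernetes and --kubernetes-version are mutually exclusive (a minimal sketch; the unset step is the remedy the error message itself suggests for a globally pinned version, and the final start should then be accepted):

    minikube start -p NoKubernetes-235907 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # rejected, exit 14
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-235907 --no-kubernetes --driver=kvm2 --container-runtime=crio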

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (112.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235907 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235907 --driver=kvm2  --container-runtime=crio: (1m52.3167417s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-235907 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (112.63s)

                                                
                                    
TestNetworkPlugins/group/false (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-492851 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-492851 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.92575ms)

                                                
                                                
-- stdout --
	* [false-492851] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 15:33:50.586403  855599 out.go:296] Setting OutFile to fd 1 ...
	I1114 15:33:50.586550  855599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:33:50.586556  855599 out.go:309] Setting ErrFile to fd 2...
	I1114 15:33:50.586564  855599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 15:33:50.586775  855599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17598-824991/.minikube/bin
	I1114 15:33:50.587375  855599 out.go:303] Setting JSON to false
	I1114 15:33:50.588385  855599 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":44183,"bootTime":1699931848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 15:33:50.588450  855599 start.go:138] virtualization: kvm guest
	I1114 15:33:50.590817  855599 out.go:177] * [false-492851] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 15:33:50.592872  855599 out.go:177]   - MINIKUBE_LOCATION=17598
	I1114 15:33:50.592855  855599 notify.go:220] Checking for updates...
	I1114 15:33:50.594374  855599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 15:33:50.596015  855599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17598-824991/kubeconfig
	I1114 15:33:50.597428  855599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17598-824991/.minikube
	I1114 15:33:50.599003  855599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 15:33:50.600423  855599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 15:33:50.602205  855599 config.go:182] Loaded profile config "NoKubernetes-235907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:33:50.602315  855599 config.go:182] Loaded profile config "force-systemd-env-249271": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:33:50.602408  855599 config.go:182] Loaded profile config "offline-crio-142254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 15:33:50.602497  855599 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 15:33:50.637375  855599 out.go:177] * Using the kvm2 driver based on user configuration
	I1114 15:33:50.638813  855599 start.go:298] selected driver: kvm2
	I1114 15:33:50.638830  855599 start.go:902] validating driver "kvm2" against <nil>
	I1114 15:33:50.638841  855599 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 15:33:50.641003  855599 out.go:177] 
	W1114 15:33:50.642566  855599 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1114 15:33:50.643916  855599 out.go:177] 

                                                
                                                
** /stderr **
E1114 15:33:52.669227  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-492851 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-492851

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-492851" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-492851" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-492851" does not exist

>>> k8s: kube-proxy logs:
error: context "false-492851" does not exist

>>> host: kubelet daemon status:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: kubelet daemon config:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> k8s: kubelet logs:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-492851

>>> host: docker daemon status:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: docker daemon config:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /etc/docker/daemon.json:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: docker system info:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: cri-docker daemon status:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: cri-docker daemon config:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: cri-dockerd version:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: containerd daemon status:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: containerd daemon config:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /etc/containerd/config.toml:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: containerd config dump:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: crio daemon status:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: crio daemon config:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: /etc/crio:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

>>> host: crio config:
* Profile "false-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492851"

----------------------- debugLogs end: false-492851 [took: 3.314489109s] --------------------------------
helpers_test.go:175: Cleaning up "false-492851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-492851
--- PASS: TestNetworkPlugins/group/false (3.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235907 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235907 --no-kubernetes --driver=kvm2  --container-runtime=crio: (7.344424199s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-235907 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-235907 status -o json: exit status 2 (308.925854ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-235907","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-235907
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-235907: (1.652482459s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.31s)
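
Note: the status check above can be reproduced by hand; a minimal sketch, assuming the NoKubernetes-235907 profile from this run still exists:

    out/minikube-linux-amd64 -p NoKubernetes-235907 status -o json
    # expected JSON for a --no-kubernetes profile: "Host":"Running", "Kubelet":"Stopped", "APIServer":"Stopped"
    echo $?   # non-zero (2 in this run): minikube status reflects stopped components in its exit code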

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235907 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235907 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.589631692s)
--- PASS: TestNoKubernetes/serial/Start (28.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-235907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-235907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (297.323975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
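
Note: the exit-status chain here is worth spelling out; a sketch of the same check run manually, assuming the profile is still up:

    out/minikube-linux-amd64 ssh -p NoKubernetes-235907 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # expected non-zero: systemctl is-active exits 3 for an inactive unit (the "status 3" in stderr above), which minikube ssh surfaces as exit status 1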

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)
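
Note: both listing forms used above are plain minikube subcommands; a sketch, assuming a minikube release whose JSON output groups profiles under valid/invalid arrays:

    out/minikube-linux-amd64 profile list                 # human-readable table
    out/minikube-linux-amd64 profile list --output=json   # machine-readable; e.g. pipe to: jq -r '.valid[].Name'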

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-235907
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-235907: (1.226805581s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-235907 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-235907 --driver=kvm2  --container-runtime=crio: (22.819409157s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-235907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-235907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (229.979168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestPause/serial/Start (115.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-584924 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1114 15:38:52.669195  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-584924 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m55.433157084s)
--- PASS: TestPause/serial/Start (115.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (102.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.237607981s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-276452
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (75.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m15.611305801s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-492851 replace --force -f testdata/netcat-deployment.yaml: (1.21969777s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8f5r9" [46ed3989-c877-45b2-80b6-d671bfea5ec6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8f5r9" [46ed3989-c877-45b2-80b6-d671bfea5ec6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.046770776s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.14s)
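
Note: the harness polls the pod directly (helpers_test.go:344); roughly the same readiness gate can be expressed with kubectl alone. A sketch, assuming the same netcat-deployment.yaml manifest:

    kubectl --context auto-492851 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-492851 wait --for=condition=Ready pod -l app=netcat --timeout=15m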

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
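
Note: the three probes above (DNS, Localhost, HairPin) are the per-plugin connectivity checks; collected here for reference, using the commands exactly as the tests ran them:

    kubectl --context auto-492851 exec deployment/netcat -- nslookup kubernetes.default                   # in-cluster DNS resolution
    kubectl --context auto-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback reachability of the pod's own port
    kubectl --context auto-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin: the pod reaching itself via its own service name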

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1114 15:41:17.628289  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m8.392334081s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (116.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1114 15:41:27.620492  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:41:34.577445  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m56.636397544s)
--- PASS: TestNetworkPlugins/group/bridge/Start (116.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2fxxn" [26c4d67d-456c-4270-855f-d8b802869b3f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.022563769s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
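
Note: an equivalent manual check that the flannel DaemonSet pod is healthy; a sketch, with the label selector and namespace taken from the test above:

    kubectl --context flannel-492851 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-492851 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m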

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cbgwz" [5501dddd-0a0b-4c03-aa69-758eca5b1150] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cbgwz" [5501dddd-0a0b-4c03-aa69-758eca5b1150] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.019227671s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (93.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m33.661049428s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (94.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m34.770899097s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wzgnt" [0b78f497-533b-4bcf-8a41-858752b0a964] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wzgnt" [0b78f497-533b-4bcf-8a41-858752b0a964] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.014841641s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (106.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-492851 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m46.603920269s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (106.60s)
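
Note: --cni accepts either a built-in plugin name or a path to a CNI manifest; the two flannel runs in this report illustrate both forms (commands abridged from the tests above):

    out/minikube-linux-amd64 start -p flannel-492851 --cni=flannel --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p custom-flannel-492851 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio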

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zrdtl" [556eb0d1-a754-4c58-9973-4b231ca5b2bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zrdtl" [556eb0d1-a754-4c58-9973-4b231ca5b2bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.016212083s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qxnjd" [37231560-03af-4554-8610-19ba277b8778] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026668664s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rbtrd" [7ece4026-9779-44c5-829f-e5d557c48c9d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028455501s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (157.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-842105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-842105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m37.81953861s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (157.82s)
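
Note: --kubernetes-version pins the cluster to a specific release (v1.16.0 here); a quick way to confirm the pinned version after start, as a sketch:

    out/minikube-linux-amd64 start -p old-k8s-version-842105 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    kubectl --context old-k8s-version-842105 get nodes   # the VERSION column should report v1.16.0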

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gr57q" [4ce285e0-03e8-483a-b19f-1b70c7245957] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gr57q" [4ce285e0-03e8-483a-b19f-1b70c7245957] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.013275601s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jhqtn" [b5df04a2-6e04-4e3f-9c09-3588b0f0297c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jhqtn" [b5df04a2-6e04-4e3f-9c09-3588b0f0297c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.012158895s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (89.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-490998 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-490998 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m29.12524229s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-279880 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-279880 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m36.998883984s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.00s)
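
Note: --embed-certs writes the client certificate and key inline into the generated kubeconfig entry instead of referencing files on disk; a sketch of one way to verify that, assuming kubectl reads the same kubeconfig:

    out/minikube-linux-amd64 start -p embed-certs-279880 --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.3
    kubectl config view --raw --minify --context=embed-certs-279880 | grep -c client-certificate-data   # 1 when the cert is embedded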

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-492851 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-492851 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-clc75" [1c68aae5-4e57-4770-9171-9ea8fd3e2d46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-clc75" [1c68aae5-4e57-4770-9171-9ea8fd3e2d46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.016081412s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-492851 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-492851 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-529430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 15:45:55.158795  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:55.164228  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:55.174583  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:55.194951  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:55.235273  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:55.315624  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:55.475967  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-529430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m57.437718773s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.44s)
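
Note: --apiserver-port=8444 moves the API server off minikube's default 8443; the simplest confirmation is the control-plane URL kubectl reports, as a sketch:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-529430 --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.3
    kubectl --context default-k8s-diff-port-529430 cluster-info   # the control-plane URL should end in :8444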

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-490998 create -f testdata/busybox.yaml
E1114 15:45:55.796879  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e3e83d9b-269c-42eb-bf93-69165d1dec5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1114 15:45:56.437669  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:45:57.718804  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e3e83d9b-269c-42eb-bf93-69165d1dec5a] Running
E1114 15:46:00.279547  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.038488665s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-490998 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.53s)
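
Note: the deploy-and-probe sequence above, reduced to plain kubectl; a sketch, assuming the same testdata/busybox.yaml manifest:

    kubectl --context no-preload-490998 create -f testdata/busybox.yaml
    kubectl --context no-preload-490998 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context no-preload-490998 exec busybox -- /bin/sh -c "ulimit -n"   # prints the container's open-file limit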

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-490998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1114 15:46:05.400321  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-490998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.266809209s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-490998 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)
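
Note: the --images/--registries flags override where the metrics-server addon pulls its image from; the describe call is what confirms the override landed. A sketch of the verification step on its own:

    kubectl --context no-preload-490998 -n kube-system describe deploy/metrics-server | grep -i 'image:'
    # the Image: line should reference the overridden registry (fake.domain) rather than the stock metrics-server image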

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-279880 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e6474ed-c3c3-4e95-8760-eb4c5deb9eae] Pending
helpers_test.go:344: "busybox" [6e6474ed-c3c3-4e95-8760-eb4c5deb9eae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1114 15:46:10.674192  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6e6474ed-c3c3-4e95-8760-eb4c5deb9eae] Running
E1114 15:46:15.640971  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.028102868s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-279880 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-279880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-279880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142801548s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-279880 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-842105 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e92974bd-1600-47d5-a2cc-090a314edab7] Pending
helpers_test.go:344: "busybox" [e92974bd-1600-47d5-a2cc-090a314edab7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e92974bd-1600-47d5-a2cc-090a314edab7] Running
E1114 15:46:34.577592  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:46:36.122108  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:46:36.376571  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:36.381870  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:36.392163  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:36.412420  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:36.452824  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:36.533209  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:36.693678  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:37.013904  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:46:37.654730  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.036080988s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-842105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.45s)
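
The deploy-and-check sequence above can be reproduced outside the test harness roughly as follows (a sketch: testdata/busybox.yaml is the repository fixture referenced above, and kubectl wait stands in for the test's own readiness polling):

    kubectl --context old-k8s-version-842105 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-842105 wait pod -l integration-test=busybox --for=condition=Ready --timeout=480s
    kubectl --context old-k8s-version-842105 exec busybox -- /bin/sh -c "ulimit -n"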

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-842105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1114 15:46:38.934868  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-842105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e] Pending
helpers_test.go:344: "busybox" [1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1cf7c496-9fce-4ecb-82d1-f78f57ab3c8e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.036066754s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-529430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1114 15:47:17.083352  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:47:17.338909  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-529430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06898089s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-529430 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (698.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-490998 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-490998 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (11m38.498228254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-490998 -n no-preload-490998
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (698.78s)
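
With --preload=false the images are pulled individually rather than restored from a preload tarball, which is consistent with the longer start time here. The images that ended up on the node can be listed the same way the VerifyKubernetesImages step does later in this report (a sketch reusing that command for this profile):

    out/minikube-linux-amd64 ssh -p no-preload-490998 "sudo crictl images -o json"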

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (608.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-279880 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 15:48:51.250335  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:52.669166  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:48:53.607529  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:53.612890  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:53.623233  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:53.643547  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:53.683872  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:53.764245  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:53.811536  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:48:53.924822  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:54.245884  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:54.886123  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:56.166744  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:58.727697  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:48:58.932114  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-279880 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (10m8.338587035s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-279880 -n embed-certs-279880
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (608.63s)
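
Because this profile was started with --embed-certs, the client credentials are embedded in the kubeconfig instead of being referenced as files under .minikube/profiles. A rough check (assuming minikube names the kubeconfig user after the profile, as it normally does; a non-zero byte count means the certificate data is inline):

    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-279880")].user.client-certificate-data}' | wc -c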

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (702.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-842105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1114 15:49:14.089940  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-842105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m42.42208221s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-842105 -n old-k8s-version-842105
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (702.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (565.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-529430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 15:50:00.133726  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:50:05.065229  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:50:10.614430  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:50:15.531240  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:50:20.614184  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:50:55.158598  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:51:01.575100  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:51:06.755966  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:51:22.845649  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:51:27.621005  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:51:32.534640  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:51:34.577220  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:51:36.376432  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:51:37.451660  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:52:04.061253  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:52:21.221427  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:52:23.495680  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:52:48.906292  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:53:22.913190  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:53:48.692380  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:53:50.596888  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:53:52.669090  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:53:53.606866  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:54:16.375796  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:54:21.292630  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
E1114 15:54:39.653319  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:55:07.336041  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/custom-flannel-492851/client.crt: no such file or directory
E1114 15:55:55.158073  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/auto-492851/client.crt: no such file or directory
E1114 15:56:27.620592  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/functional-593453/client.crt: no such file or directory
E1114 15:56:34.577113  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:56:36.376558  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/flannel-492851/client.crt: no such file or directory
E1114 15:57:21.221256  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/enable-default-cni-492851/client.crt: no such file or directory
E1114 15:57:57.629505  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/addons-317784/client.crt: no such file or directory
E1114 15:58:22.912536  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/bridge-492851/client.crt: no such file or directory
E1114 15:58:48.692416  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/calico-492851/client.crt: no such file or directory
E1114 15:58:52.668464  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/ingress-addon-legacy-944535/client.crt: no such file or directory
E1114 15:58:53.606876  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/kindnet-492851/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-529430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (9m24.870146043s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-529430 -n default-k8s-diff-port-529430
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (565.17s)
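
This profile runs the API server on port 8444 (--apiserver-port=8444) instead of the default 8443. After the restart, the custom port can be read back from the kubeconfig (a sketch assuming the cluster entry carries the profile name, which is how minikube normally writes it):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-529430")].cluster.server}'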

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (61.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-161256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-161256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m1.967745745s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.97s)
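
The start above passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. Whether the node actually shows an allocated podCIDR depends on node CIDR allocation being enabled in this configuration; if it is, the following prints a range from 10.42.0.0/16 (a sketch, not part of the test):

    kubectl --context newest-cni-161256 get nodes -o jsonpath='{.items[0].spec.podCIDR}'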

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-161256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-161256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.515865351s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-161256 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-161256 --alsologtostderr -v=3: (11.136711366s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-161256 -n newest-cni-161256
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-161256 -n newest-cni-161256: exit status 7 (87.045705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-161256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
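
The status check above is expected to return a non-zero exit code because the profile is deliberately stopped, and the test treats exit status 7 as acceptable. A manual equivalent tolerates that exit before enabling the addon (a sketch built from the commands in the run above):

    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-161256 -n newest-cni-161256 || true
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-161256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4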

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (51.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-161256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-161256 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (51.267284428s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-161256 -n newest-cni-161256
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-161256 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-161256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-161256 -n newest-cni-161256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-161256 -n newest-cni-161256: exit status 2 (261.591011ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-161256 -n newest-cni-161256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-161256 -n newest-cni-161256: exit status 2 (265.196636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-161256 --alsologtostderr -v=1
E1114 16:16:06.231079  832211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17598-824991/.minikube/profiles/no-preload-490998/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-161256 -n newest-cni-161256
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-161256 -n newest-cni-161256
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)
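
While the profile is paused, the APIServer query prints "Paused" and the Kubelet query prints "Stopped", both with exit status 2, which the test tolerates. A hand-run version of the pause/unpause round trip (a sketch using the same binary and profile as above):

    out/minikube-linux-amd64 pause -p newest-cni-161256
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-161256 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-161256 || true
    out/minikube-linux-amd64 unpause -p newest-cni-161256
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-161256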

                                                
                                    

Test skip (36/292)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
234 TestNetworkPlugins/group/kubenet 3.46
243 TestNetworkPlugins/group/cilium 3.76
258 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-492851 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-492851" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-492851

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492851"

                                                
                                                
----------------------- debugLogs end: kubenet-492851 [took: 3.306715649s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-492851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-492851
--- SKIP: TestNetworkPlugins/group/kubenet (3.46s)
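Every kubenet entry above fails with either "Profile \"kubenet-492851\" not found" or "context \"kubenet-492851\" does not exist" because the group is skipped before "minikube start" ever runs, so no profile or kubeconfig context is created. As a hedged illustration only (this is not the actual minikube debugLogs helper, and the contextExists name is made up), a minimal Go sketch of checking for the context up front so a dump like this can short-circuit cleanly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl knows about the named context.
// "kubectl config get-contexts -o name" prints one context name per line.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == name {
			return true
		}
	}
	return false
}

func main() {
	profile := "kubenet-492851"
	if !contextExists(profile) {
		// Matches the situation in the dump above: the profile was never started.
		fmt.Printf("context %q does not exist; skipping k8s diagnostics\n", profile)
		return
	}
	// ...collect "kubectl describe" / "kubectl logs" output here...
}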

                                                
                                    
TestNetworkPlugins/group/cilium (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-492851 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-492851" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-492851

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-492851" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492851"

                                                
                                                
----------------------- debugLogs end: cilium-492851 [took: 3.599559058s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-492851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-492851
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)
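The cilium group is skipped outright at net_test.go:102 ("interfering with other tests and is outdated"), which is why its debugLogs dump hits the same missing-profile and missing-context errors as the kubenet one. As an illustrative sketch only (not the actual net_test.go source; the case names and the outdated flag are assumptions), this is how such a case can be gated with t.Skip in a table-driven Go test:

package net_test

import "testing"

func TestNetworkPluginsSketch(t *testing.T) {
	cases := []struct {
		name     string
		outdated bool
	}{
		{name: "cilium", outdated: true},
		{name: "kubenet", outdated: false},
	}
	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			if tc.outdated {
				// Produces a SKIP result like the one recorded above.
				t.Skip("Skipping the test as it's interfering with other tests and is outdated")
			}
			// ...start a cluster with this CNI and run connectivity checks...
		})
	}
}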

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-331502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-331502
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
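The disable-driver-mounts case is skipped because it only applies to the virtualbox driver, while this KVM_Linux_crio run uses kvm2. A minimal Go sketch (hypothetical helper and test names, not the actual start_stop_delete_test.go code) of a driver-gated skip that would produce this result:

package startstop_test

import "testing"

// skipUnlessDriver is a hypothetical helper: it skips the calling test
// unless the suite is running on the required VM driver.
func skipUnlessDriver(t *testing.T, gotDriver, wantDriver string) {
	t.Helper()
	if gotDriver != wantDriver {
		t.Skipf("skipping %s - only runs on %s", t.Name(), wantDriver)
	}
}

func TestDisableDriverMountsSketch(t *testing.T) {
	// In this report the driver is kvm2, so the case is skipped immediately.
	skipUnlessDriver(t, "kvm2", "virtualbox")
	// ...start with --disable-driver-mounts and assert the host mounts are absent...
}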

                                                
                                    